Nov 22 00:06:03 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 22 00:06:03 crc restorecon[4685]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 22 00:06:03 crc restorecon[4685]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 22 00:06:03 crc restorecon[4685]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc 
restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 00:06:03 crc restorecon[4685]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 00:06:03 crc restorecon[4685]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 00:06:03 crc restorecon[4685]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 00:06:03 crc 
restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 22 
00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 22 00:06:03 crc restorecon[4685]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 00:06:03 crc 
restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 00:06:03 crc restorecon[4685]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 00:06:03 crc restorecon[4685]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 00:06:03 crc restorecon[4685]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 00:06:03 crc 
restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 22 00:06:03 crc restorecon[4685]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:03 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 00:06:03 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 
00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 00:06:04 crc 
restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc 
restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc 
restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 
22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 
crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc 
restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc 
restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc 
restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc 
restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 00:06:04 crc 
restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 00:06:04 crc restorecon[4685]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 00:06:04 crc restorecon[4685]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 00:06:04 crc restorecon[4685]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 22 00:06:05 crc kubenswrapper[4928]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 22 00:06:05 crc kubenswrapper[4928]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 22 00:06:05 crc kubenswrapper[4928]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 22 00:06:05 crc kubenswrapper[4928]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 22 00:06:05 crc kubenswrapper[4928]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 22 00:06:05 crc kubenswrapper[4928]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.367918 4928 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373514 4928 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373556 4928 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373566 4928 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373576 4928 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373585 4928 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373593 4928 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373600 4928 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373610 4928 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373619 4928 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373627 4928 
feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373635 4928 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373645 4928 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373655 4928 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373664 4928 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373673 4928 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373681 4928 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373689 4928 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373696 4928 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373704 4928 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373712 4928 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373722 4928 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373735 4928 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373744 4928 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373754 4928 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373762 4928 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373771 4928 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373783 4928 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373819 4928 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373829 4928 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373837 4928 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373845 4928 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373854 4928 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373862 4928 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373870 4928 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373878 4928 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 22 00:06:05 
crc kubenswrapper[4928]: W1122 00:06:05.373886 4928 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373894 4928 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373902 4928 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373910 4928 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373918 4928 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373925 4928 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373933 4928 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373941 4928 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373949 4928 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373958 4928 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373966 4928 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373974 4928 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373982 4928 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.373991 4928 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 
00:06:05.373999 4928 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.374007 4928 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.374016 4928 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.374023 4928 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.374031 4928 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.374039 4928 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.374046 4928 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.374054 4928 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.374065 4928 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.374075 4928 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.374084 4928 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.374093 4928 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.374101 4928 feature_gate.go:330] unrecognized feature gate: Example Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.374108 4928 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.374116 4928 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.374123 4928 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.374132 4928 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.374139 4928 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.374147 4928 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.374155 4928 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.374162 4928 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.374170 4928 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375278 4928 flags.go:64] FLAG: --address="0.0.0.0" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375311 4928 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 22 
00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375333 4928 flags.go:64] FLAG: --anonymous-auth="true" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375347 4928 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375365 4928 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375377 4928 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375391 4928 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375406 4928 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375419 4928 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375431 4928 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375444 4928 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375456 4928 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375466 4928 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375479 4928 flags.go:64] FLAG: --cgroup-root="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375490 4928 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375501 4928 flags.go:64] FLAG: --client-ca-file="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375513 4928 flags.go:64] FLAG: --cloud-config="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375524 4928 flags.go:64] FLAG: --cloud-provider="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 
00:06:05.375535 4928 flags.go:64] FLAG: --cluster-dns="[]" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375547 4928 flags.go:64] FLAG: --cluster-domain="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375558 4928 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375569 4928 flags.go:64] FLAG: --config-dir="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375581 4928 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375594 4928 flags.go:64] FLAG: --container-log-max-files="5" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375609 4928 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375620 4928 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375632 4928 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375644 4928 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375655 4928 flags.go:64] FLAG: --contention-profiling="false" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375666 4928 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375677 4928 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375690 4928 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375700 4928 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375714 4928 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375725 4928 flags.go:64] FLAG: 
--enable-controller-attach-detach="true" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375736 4928 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375747 4928 flags.go:64] FLAG: --enable-load-reader="false" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375759 4928 flags.go:64] FLAG: --enable-server="true" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375770 4928 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375785 4928 flags.go:64] FLAG: --event-burst="100" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375832 4928 flags.go:64] FLAG: --event-qps="50" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375846 4928 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375860 4928 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375872 4928 flags.go:64] FLAG: --eviction-hard="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375887 4928 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375898 4928 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375909 4928 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375921 4928 flags.go:64] FLAG: --eviction-soft="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375931 4928 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375943 4928 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375954 4928 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375964 
4928 flags.go:64] FLAG: --experimental-mounter-path="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375977 4928 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375988 4928 flags.go:64] FLAG: --fail-swap-on="true" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.375999 4928 flags.go:64] FLAG: --feature-gates="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376012 4928 flags.go:64] FLAG: --file-check-frequency="20s" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376023 4928 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376036 4928 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376048 4928 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376061 4928 flags.go:64] FLAG: --healthz-port="10248" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376076 4928 flags.go:64] FLAG: --help="false" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376089 4928 flags.go:64] FLAG: --hostname-override="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376100 4928 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376112 4928 flags.go:64] FLAG: --http-check-frequency="20s" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376123 4928 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376135 4928 flags.go:64] FLAG: --image-credential-provider-config="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376150 4928 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376162 4928 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376173 4928 flags.go:64] FLAG: 
--image-service-endpoint="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376183 4928 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376193 4928 flags.go:64] FLAG: --kube-api-burst="100" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376205 4928 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376217 4928 flags.go:64] FLAG: --kube-api-qps="50" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376230 4928 flags.go:64] FLAG: --kube-reserved="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376242 4928 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376252 4928 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376265 4928 flags.go:64] FLAG: --kubelet-cgroups="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376276 4928 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376287 4928 flags.go:64] FLAG: --lock-file="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376298 4928 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376310 4928 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376321 4928 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376339 4928 flags.go:64] FLAG: --log-json-split-stream="false" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376352 4928 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376364 4928 flags.go:64] FLAG: --log-text-split-stream="false" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376380 4928 flags.go:64] FLAG: 
--logging-format="text" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376390 4928 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376403 4928 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376417 4928 flags.go:64] FLAG: --manifest-url="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376429 4928 flags.go:64] FLAG: --manifest-url-header="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376445 4928 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376458 4928 flags.go:64] FLAG: --max-open-files="1000000" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376474 4928 flags.go:64] FLAG: --max-pods="110" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376487 4928 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376500 4928 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376511 4928 flags.go:64] FLAG: --memory-manager-policy="None" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376522 4928 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376534 4928 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376571 4928 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376584 4928 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376614 4928 flags.go:64] FLAG: --node-status-max-images="50" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376625 4928 flags.go:64] FLAG: 
--node-status-update-frequency="10s" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376636 4928 flags.go:64] FLAG: --oom-score-adj="-999" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376648 4928 flags.go:64] FLAG: --pod-cidr="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376661 4928 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376682 4928 flags.go:64] FLAG: --pod-manifest-path="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376693 4928 flags.go:64] FLAG: --pod-max-pids="-1" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376704 4928 flags.go:64] FLAG: --pods-per-core="0" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376716 4928 flags.go:64] FLAG: --port="10250" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376728 4928 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376740 4928 flags.go:64] FLAG: --provider-id="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376752 4928 flags.go:64] FLAG: --qos-reserved="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376763 4928 flags.go:64] FLAG: --read-only-port="10255" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376777 4928 flags.go:64] FLAG: --register-node="true" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376789 4928 flags.go:64] FLAG: --register-schedulable="true" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376840 4928 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376862 4928 flags.go:64] FLAG: --registry-burst="10" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376874 4928 flags.go:64] FLAG: --registry-qps="5" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 
00:06:05.376885 4928 flags.go:64] FLAG: --reserved-cpus="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376897 4928 flags.go:64] FLAG: --reserved-memory="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376913 4928 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376924 4928 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376936 4928 flags.go:64] FLAG: --rotate-certificates="false" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376947 4928 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376960 4928 flags.go:64] FLAG: --runonce="false" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376974 4928 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376987 4928 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.376998 4928 flags.go:64] FLAG: --seccomp-default="false" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.377010 4928 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.377022 4928 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.377034 4928 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.377047 4928 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.377060 4928 flags.go:64] FLAG: --storage-driver-password="root" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.377071 4928 flags.go:64] FLAG: --storage-driver-secure="false" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.377083 4928 flags.go:64] FLAG: --storage-driver-table="stats" Nov 22 00:06:05 crc 
kubenswrapper[4928]: I1122 00:06:05.377096 4928 flags.go:64] FLAG: --storage-driver-user="root" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.377108 4928 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.377121 4928 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.377132 4928 flags.go:64] FLAG: --system-cgroups="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.377143 4928 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.377163 4928 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.377174 4928 flags.go:64] FLAG: --tls-cert-file="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.377186 4928 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.377199 4928 flags.go:64] FLAG: --tls-min-version="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.377213 4928 flags.go:64] FLAG: --tls-private-key-file="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.377226 4928 flags.go:64] FLAG: --topology-manager-policy="none" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.377238 4928 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.377251 4928 flags.go:64] FLAG: --topology-manager-scope="container" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.377263 4928 flags.go:64] FLAG: --v="2" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.377282 4928 flags.go:64] FLAG: --version="false" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.377297 4928 flags.go:64] FLAG: --vmodule="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.377312 4928 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 22 00:06:05 crc 
kubenswrapper[4928]: I1122 00:06:05.377325 4928 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377588 4928 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377608 4928 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377621 4928 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377635 4928 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377645 4928 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377658 4928 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377668 4928 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377680 4928 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377691 4928 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377703 4928 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377715 4928 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377725 4928 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377736 4928 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 22 00:06:05 crc 
kubenswrapper[4928]: W1122 00:06:05.377746 4928 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377757 4928 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377770 4928 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377783 4928 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377827 4928 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377841 4928 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377852 4928 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377866 4928 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377878 4928 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377889 4928 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377901 4928 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377912 4928 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377922 4928 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377932 4928 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377946 4928 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377957 4928 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377967 4928 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377977 4928 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377987 4928 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.377998 4928 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378008 4928 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378022 4928 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378035 4928 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378046 4928 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378057 4928 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378067 4928 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378078 4928 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378090 4928 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378101 4928 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378113 4928 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378126 4928 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378137 4928 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378147 4928 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378157 4928 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378167 4928 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378178 4928 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378188 4928 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378198 4928 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378209 4928 feature_gate.go:330] unrecognized feature gate: Example
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378219 4928 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378229 4928 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378239 4928 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378249 4928 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378260 4928 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378270 4928 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378280 4928 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378291 4928 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378302 4928 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378312 4928 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378322 4928 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378332 4928 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378342 4928 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378352 4928 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378362 4928 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378372 4928 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378382 4928 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378392 4928 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.378434 4928 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.378454 4928 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.395466 4928 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.395523 4928 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397064 4928 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397103 4928 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397117 4928 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397129 4928 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397143 4928 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397157 4928 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397172 4928 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397182 4928 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397192 4928 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397202 4928 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397213 4928 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397222 4928 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397233 4928 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397246 4928 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397257 4928 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397268 4928 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397279 4928 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397289 4928 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397299 4928 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397309 4928 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397320 4928 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397329 4928 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397342 4928 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397354 4928 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397364 4928 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397374 4928 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397384 4928 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397398 4928 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397412 4928 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397424 4928 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397435 4928 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397446 4928 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397457 4928 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397466 4928 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397477 4928 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397487 4928 feature_gate.go:330] unrecognized feature gate: Example
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397497 4928 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397507 4928 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397516 4928 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397526 4928 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397534 4928 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397545 4928 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397555 4928 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397563 4928 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397572 4928 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397581 4928 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397595 4928 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397606 4928 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397615 4928 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397625 4928 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397634 4928 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397647 4928 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397658 4928 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397669 4928 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397678 4928 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397687 4928 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397697 4928 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397707 4928 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397716 4928 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397726 4928 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397734 4928 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397743 4928 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397755 4928 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397763 4928 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397774 4928 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397783 4928 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397821 4928 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397832 4928 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397841 4928 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397850 4928 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.397859 4928 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.397876 4928 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398157 4928 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398173 4928 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398183 4928 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398192 4928 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398201 4928 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398210 4928 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398219 4928 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398232 4928 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398245 4928 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398257 4928 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398268 4928 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398278 4928 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398287 4928 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398299 4928 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398311 4928 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398322 4928 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398332 4928 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398342 4928 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398352 4928 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398362 4928 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398373 4928 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398382 4928 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398392 4928 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398401 4928 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398411 4928 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398424 4928 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398436 4928 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398478 4928 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398488 4928 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398497 4928 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398506 4928 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398516 4928 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398524 4928 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398533 4928 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398541 4928 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398551 4928 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398560 4928 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398572 4928 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398582 4928 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398592 4928 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398606 4928 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398618 4928 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398629 4928 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398638 4928 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398647 4928 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398655 4928 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398663 4928 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398674 4928 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398682 4928 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398692 4928 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398701 4928 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398711 4928 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398721 4928 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398730 4928 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398741 4928 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398749 4928 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398757 4928 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398765 4928 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398774 4928 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398782 4928 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398834 4928 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398844 4928 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398854 4928 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398862 4928 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398870 4928 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398879 4928 feature_gate.go:330] unrecognized feature gate: Example
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398887 4928 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398896 4928 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398904 4928 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398912 4928 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.398920 4928 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.398934 4928 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.399267 4928 server.go:940] "Client rotation is on, will bootstrap in background"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.405745 4928 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.405941 4928 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.407649 4928 server.go:997] "Starting client certificate rotation"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.407700 4928 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.408918 4928 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-03 20:40:29.760079364 +0000 UTC
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.409030 4928 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 1028h34m24.351053693s for next certificate rotation
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.438005 4928 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.440992 4928 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.461531 4928 log.go:25] "Validated CRI v1 runtime API"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.505952 4928 log.go:25] "Validated CRI v1 image API"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.508203 4928 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.514146 4928 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-22-00-01-14-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.514189 4928 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.531376 4928 manager.go:217] Machine: {Timestamp:2025-11-22 00:06:05.529636514 +0000 UTC m=+0.817667705 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:c769a56a-a12f-4ddf-8e71-bd2116048a86 BootID:a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:1a:24:9c Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:1a:24:9c Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:8e:75:d2 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:a3:32:db Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:67:e3:76 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:21:d1:c8 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:9e:06:a0:60:e1:d2 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:3a:01:90:27:c6:d7 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.531670 4928 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.531852 4928 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.532408 4928 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.532682 4928 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.532728 4928 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.533888 4928 topology_manager.go:138] "Creating topology manager with none policy"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.533912 4928 container_manager_linux.go:303] "Creating device plugin manager"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.534455 4928 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.534494 4928 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.534868 4928 state_mem.go:36] "Initialized new in-memory state store"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.534999 4928 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.538591 4928 kubelet.go:418] "Attempting to sync node with API server"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.538615 4928 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.538645 4928 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.538665 4928 kubelet.go:324] "Adding apiserver pod source"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.538679 4928 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.543583 4928 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.544618 4928 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.546645 4928 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.548558 4928 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.548591 4928 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.548602 4928 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.548611 4928 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.548627 4928 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.548637 4928 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.548647 4928 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.548663 4928 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.548673 4928 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.548683 4928 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.548704 4928 
plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.548714 4928 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.549555 4928 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.549581 4928 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Nov 22 00:06:05 crc kubenswrapper[4928]: E1122 00:06:05.549703 4928 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.549736 4928 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 22 00:06:05 crc kubenswrapper[4928]: E1122 00:06:05.549724 4928 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.550313 4928 server.go:1280] "Started kubelet" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 
00:06:05.552139 4928 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.552665 4928 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.552652 4928 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 22 00:06:05 crc systemd[1]: Started Kubernetes Kubelet. Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.553760 4928 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.557490 4928 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.557556 4928 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.557950 4928 server.go:460] "Adding debug handlers to kubelet server" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.557989 4928 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.558023 4928 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.558296 4928 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.558298 4928 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 04:31:04.393458434 +0000 UTC Nov 22 00:06:05 crc kubenswrapper[4928]: E1122 00:06:05.559370 4928 kubelet_node_status.go:503] "Error getting the current node 
from lister" err="node \"crc\" not found" Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.560551 4928 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Nov 22 00:06:05 crc kubenswrapper[4928]: E1122 00:06:05.560692 4928 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Nov 22 00:06:05 crc kubenswrapper[4928]: E1122 00:06:05.562433 4928 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="200ms" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.562755 4928 factory.go:55] Registering systemd factory Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.562785 4928 factory.go:221] Registration of the systemd container factory successfully Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.566456 4928 factory.go:153] Registering CRI-O factory Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.566565 4928 factory.go:221] Registration of the crio container factory successfully Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.566754 4928 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.566916 4928 factory.go:103] Registering 
Raw factory Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.569418 4928 manager.go:1196] Started watching for new ooms in manager Nov 22 00:06:05 crc kubenswrapper[4928]: E1122 00:06:05.568648 4928 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.194:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187a2b705620a3dd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-22 00:06:05.550281693 +0000 UTC m=+0.838312894,LastTimestamp:2025-11-22 00:06:05.550281693 +0000 UTC m=+0.838312894,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.573463 4928 manager.go:319] Starting recovery of all containers Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.576084 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.576153 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.576173 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.579282 4928 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.579362 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.579419 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.579449 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.579512 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.579536 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.579565 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.579600 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.579629 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.579653 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.579675 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.579723 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.579748 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.579772 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.579829 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.579900 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.579929 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.579984 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.580022 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.580058 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.580150 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.580206 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.580305 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.580337 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" 
Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.580390 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.580424 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.580477 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.580506 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.580545 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.580575 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.580609 
4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.580640 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.580836 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.580874 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.580904 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.580935 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.580966 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.580997 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.581027 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.581059 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.581094 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.581121 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.581165 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.581193 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.581223 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.581253 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.581311 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.581340 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.581416 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.581458 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.581499 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.581580 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.581614 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.581644 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.581666 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.581688 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.581727 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.581747 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.581772 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.581873 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.581904 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.581935 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.581960 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.581988 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.582053 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.582082 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.582108 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" 
seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.582129 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.582170 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.582200 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.582229 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.582301 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.582383 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.582415 4928 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.582444 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.582472 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.582499 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.582523 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.582545 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.582568 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.582619 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.582671 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.582694 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.582717 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.582750 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.582773 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.582934 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.582961 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.583012 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.583036 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.583057 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.583078 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.583099 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.583121 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.583141 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.583162 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.583221 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.583251 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.583282 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.583313 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.583341 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.583373 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.583461 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.583500 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.583533 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.583561 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.583587 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.583613 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.583639 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.583727 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" 
seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.583760 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.583789 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.583854 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.583882 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.583909 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.583982 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 
00:06:05.584007 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.584034 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.584063 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.584095 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.584122 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.584147 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.584173 4928 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.584200 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.584227 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.584249 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.584304 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.584334 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.584410 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.584481 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.584510 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.584536 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.584560 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.584589 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.584613 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.584635 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.584663 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.584687 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.584711 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.584736 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.584760 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.584786 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.584844 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.584872 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.584898 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.584942 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.584969 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" 
volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.584998 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585026 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585056 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585084 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585145 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585178 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585205 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585229 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585253 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585282 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585312 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585345 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" 
seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585372 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585395 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585422 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585463 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585486 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585509 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585532 4928 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585555 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585581 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585608 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585631 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585659 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585683 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585736 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585762 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585787 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585845 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585868 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585895 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585920 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585968 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.585992 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.586016 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.586044 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.586072 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.586105 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.586134 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.586161 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.586188 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.586213 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.586238 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.586269 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.586296 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.586324 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.586350 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.586377 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.586406 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.586435 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.586463 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.586491 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.586517 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.586547 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.586573 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.586600 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.586626 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.586653 4928 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.586677 4928 reconstruct.go:97] "Volume reconstruction finished" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.586697 4928 reconciler.go:26] "Reconciler: start to sync state" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.606629 4928 manager.go:324] Recovery completed Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.614567 4928 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.616912 4928 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.617009 4928 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.617043 4928 kubelet.go:2335] "Starting kubelet main sync loop" Nov 22 00:06:05 crc kubenswrapper[4928]: E1122 00:06:05.617161 4928 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.617669 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.619118 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.619208 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.619268 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:05 crc kubenswrapper[4928]: W1122 00:06:05.620049 4928 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Nov 22 00:06:05 crc kubenswrapper[4928]: E1122 00:06:05.620331 4928 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.621908 4928 
cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.621940 4928 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.621962 4928 state_mem.go:36] "Initialized new in-memory state store" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.644564 4928 policy_none.go:49] "None policy: Start" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.645600 4928 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.645650 4928 state_mem.go:35] "Initializing new in-memory state store" Nov 22 00:06:05 crc kubenswrapper[4928]: E1122 00:06:05.660095 4928 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.707016 4928 manager.go:334] "Starting Device Plugin manager" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.707148 4928 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.707168 4928 server.go:79] "Starting device plugin registration server" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.707709 4928 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.707733 4928 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.707888 4928 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.707980 4928 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.707987 4928 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" Nov 22 00:06:05 crc kubenswrapper[4928]: E1122 00:06:05.714398 4928 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.717539 4928 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.717670 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.718680 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.718729 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.718740 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.718913 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.719345 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.719427 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.719584 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.719621 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.719634 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.719828 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.719940 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.719990 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.720908 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.720951 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.720962 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.721351 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.721369 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.721381 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.721389 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.721392 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.721399 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.721561 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:05 crc 
kubenswrapper[4928]: I1122 00:06:05.721633 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.721659 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.722428 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.722452 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.722460 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.722603 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.722835 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.722885 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.723327 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.723349 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.723361 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.723636 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.723658 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.723667 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.723854 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.723895 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.723920 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.724149 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.724185 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.725286 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.725313 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.725322 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:05 crc kubenswrapper[4928]: E1122 00:06:05.763367 4928 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="400ms" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.789028 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.789096 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.789148 4928 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.789260 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.789306 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.789332 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.789349 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.789368 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.789438 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.789482 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.789508 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.789526 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.789543 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.789577 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.789637 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.808211 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.809825 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.809874 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.809891 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.809932 4928 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 00:06:05 crc kubenswrapper[4928]: E1122 00:06:05.810691 4928 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Nov 22 00:06:05 crc 
kubenswrapper[4928]: I1122 00:06:05.891413 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.891998 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.892195 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.892337 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.892442 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.892491 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.891743 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.892110 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.892657 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.892481 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.893073 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.893155 4928 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.893224 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.893257 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.893323 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.893353 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.893414 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.893449 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.893512 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.893589 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.894120 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.894152 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.894171 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.894192 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.894203 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.894207 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.894230 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.894234 4928 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.894242 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 00:06:05 crc kubenswrapper[4928]: I1122 00:06:05.894368 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 00:06:06 crc kubenswrapper[4928]: I1122 00:06:06.011918 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:06 crc kubenswrapper[4928]: I1122 00:06:06.013858 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:06 crc kubenswrapper[4928]: I1122 00:06:06.013923 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:06 crc kubenswrapper[4928]: I1122 00:06:06.013944 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:06 crc kubenswrapper[4928]: I1122 00:06:06.013985 4928 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 00:06:06 crc kubenswrapper[4928]: E1122 00:06:06.014770 4928 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.194:6443: connect: connection refused" node="crc" Nov 22 00:06:06 crc kubenswrapper[4928]: I1122 00:06:06.057611 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 00:06:06 crc kubenswrapper[4928]: I1122 00:06:06.067671 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 00:06:06 crc kubenswrapper[4928]: I1122 00:06:06.086294 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 00:06:06 crc kubenswrapper[4928]: I1122 00:06:06.096089 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 22 00:06:06 crc kubenswrapper[4928]: I1122 00:06:06.101990 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 00:06:06 crc kubenswrapper[4928]: W1122 00:06:06.108323 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-07972b81f8979cd884c6a707d3fffa2de8b96447cd0155ef0b0f86fc7480ba6c WatchSource:0}: Error finding container 07972b81f8979cd884c6a707d3fffa2de8b96447cd0155ef0b0f86fc7480ba6c: Status 404 returned error can't find the container with id 07972b81f8979cd884c6a707d3fffa2de8b96447cd0155ef0b0f86fc7480ba6c Nov 22 00:06:06 crc kubenswrapper[4928]: W1122 00:06:06.112132 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-77b7ba9efdc9214d5d6fcd6d6c22ef8391d32a474f1c124cf520ef8b3c1ad12a WatchSource:0}: Error finding container 77b7ba9efdc9214d5d6fcd6d6c22ef8391d32a474f1c124cf520ef8b3c1ad12a: Status 404 returned error 
can't find the container with id 77b7ba9efdc9214d5d6fcd6d6c22ef8391d32a474f1c124cf520ef8b3c1ad12a Nov 22 00:06:06 crc kubenswrapper[4928]: W1122 00:06:06.128887 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-6ae8aca3cde2952e919790cd88b9335380c93fe18363a91397eab3173620a82d WatchSource:0}: Error finding container 6ae8aca3cde2952e919790cd88b9335380c93fe18363a91397eab3173620a82d: Status 404 returned error can't find the container with id 6ae8aca3cde2952e919790cd88b9335380c93fe18363a91397eab3173620a82d Nov 22 00:06:06 crc kubenswrapper[4928]: W1122 00:06:06.131317 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-5af877ec8e218f1af67eac5550098ea48fc0c8e5a775cc73e7232a729bb2014b WatchSource:0}: Error finding container 5af877ec8e218f1af67eac5550098ea48fc0c8e5a775cc73e7232a729bb2014b: Status 404 returned error can't find the container with id 5af877ec8e218f1af67eac5550098ea48fc0c8e5a775cc73e7232a729bb2014b Nov 22 00:06:06 crc kubenswrapper[4928]: E1122 00:06:06.164302 4928 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="800ms" Nov 22 00:06:06 crc kubenswrapper[4928]: I1122 00:06:06.415850 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:06 crc kubenswrapper[4928]: I1122 00:06:06.417879 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:06 crc kubenswrapper[4928]: I1122 00:06:06.417928 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 00:06:06 crc kubenswrapper[4928]: I1122 00:06:06.417940 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:06 crc kubenswrapper[4928]: I1122 00:06:06.417976 4928 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 00:06:06 crc kubenswrapper[4928]: E1122 00:06:06.418551 4928 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Nov 22 00:06:06 crc kubenswrapper[4928]: I1122 00:06:06.553279 4928 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Nov 22 00:06:06 crc kubenswrapper[4928]: I1122 00:06:06.559308 4928 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 17:28:27.055091802 +0000 UTC Nov 22 00:06:06 crc kubenswrapper[4928]: I1122 00:06:06.559394 4928 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 65h22m20.49570121s for next certificate rotation Nov 22 00:06:06 crc kubenswrapper[4928]: I1122 00:06:06.622590 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0427654dd9c113f545e39b1a81f5e6aa355de36c20465392c00b0c0c42c83a55"} Nov 22 00:06:06 crc kubenswrapper[4928]: I1122 00:06:06.623756 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6ae8aca3cde2952e919790cd88b9335380c93fe18363a91397eab3173620a82d"} Nov 22 00:06:06 crc 
kubenswrapper[4928]: I1122 00:06:06.624921 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"77b7ba9efdc9214d5d6fcd6d6c22ef8391d32a474f1c124cf520ef8b3c1ad12a"} Nov 22 00:06:06 crc kubenswrapper[4928]: I1122 00:06:06.626713 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"07972b81f8979cd884c6a707d3fffa2de8b96447cd0155ef0b0f86fc7480ba6c"} Nov 22 00:06:06 crc kubenswrapper[4928]: I1122 00:06:06.627574 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5af877ec8e218f1af67eac5550098ea48fc0c8e5a775cc73e7232a729bb2014b"} Nov 22 00:06:06 crc kubenswrapper[4928]: W1122 00:06:06.633451 4928 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Nov 22 00:06:06 crc kubenswrapper[4928]: E1122 00:06:06.633535 4928 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Nov 22 00:06:06 crc kubenswrapper[4928]: W1122 00:06:06.702828 4928 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: 
connection refused Nov 22 00:06:06 crc kubenswrapper[4928]: E1122 00:06:06.703425 4928 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Nov 22 00:06:06 crc kubenswrapper[4928]: W1122 00:06:06.829690 4928 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Nov 22 00:06:06 crc kubenswrapper[4928]: E1122 00:06:06.829826 4928 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Nov 22 00:06:06 crc kubenswrapper[4928]: E1122 00:06:06.965618 4928 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="1.6s" Nov 22 00:06:07 crc kubenswrapper[4928]: W1122 00:06:07.035681 4928 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Nov 22 00:06:07 crc kubenswrapper[4928]: E1122 00:06:07.035880 4928 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.219115 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.221241 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.221313 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.221329 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.221408 4928 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 00:06:07 crc kubenswrapper[4928]: E1122 00:06:07.222252 4928 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.553426 4928 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.632678 4928 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d" exitCode=0 Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.632752 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d"} Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.632915 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.633891 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.633917 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.633926 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.635345 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.636066 4928 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8f49ab5a57258af6ebf338f99b142b303c6297ba3bd1d6fd1180ef70f07be521" exitCode=0 Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.636113 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.636202 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.636128 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8f49ab5a57258af6ebf338f99b142b303c6297ba3bd1d6fd1180ef70f07be521"} Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.636254 4928 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.636311 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.638041 4928 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="c69dae3205a1f382eaf3d731198ceb9cf951e6222d15d2385366fd039f4aca84" exitCode=0 Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.638138 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"c69dae3205a1f382eaf3d731198ceb9cf951e6222d15d2385366fd039f4aca84"} Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.638257 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.638270 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.638312 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.638339 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.639854 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.639891 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.639905 4928 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.644135 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35"} Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.644185 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd"} Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.644203 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162"} Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.644218 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963"} Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.644281 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.649952 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.649996 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.650008 4928 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.651193 4928 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="0f33221bc5927abb9d440a7f5e93ef8d011b16384b3274406cab3b93ab4c66e8" exitCode=0 Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.651430 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"0f33221bc5927abb9d440a7f5e93ef8d011b16384b3274406cab3b93ab4c66e8"} Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.651505 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.654431 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.654475 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.654488 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:07 crc kubenswrapper[4928]: I1122 00:06:07.896166 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.047770 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.553781 4928 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 
38.102.83.194:6443: connect: connection refused Nov 22 00:06:08 crc kubenswrapper[4928]: E1122 00:06:08.567243 4928 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="3.2s" Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.658361 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"1e9328d62b6a51b26907125b6fc9fcf05b557e7e9668726cc0814e83cda4220d"} Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.658426 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.660224 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.660258 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.660271 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.662301 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"be2f7231dd5ee03fb1926ca0a3e57fa2d71013d1ddfd9aa465fa60ebac22b8b7"} Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.662339 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f55dd3a4e73ec5645006a48d18605b1e3f1a96cada3c5895bfcc10dea861e698"} Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.662356 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c7f7a67672f32118ccc6fd888ae51bd40d45fdc973168d3a31d45a4d7fdcb947"} Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.662401 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.663609 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.663637 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.663647 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.667682 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d"} Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.667710 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5"} Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.667723 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d"} Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.667733 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49"} Nov 22 00:06:08 crc kubenswrapper[4928]: W1122 00:06:08.670349 4928 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Nov 22 00:06:08 crc kubenswrapper[4928]: E1122 00:06:08.670411 4928 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.671056 4928 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2d0f27b367b69b9449a57e942f29314b30bcbf987b113e93163adac3d5beef4d" exitCode=0 Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.671167 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.671196 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2d0f27b367b69b9449a57e942f29314b30bcbf987b113e93163adac3d5beef4d"} Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.671434 4928 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.672034 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.672074 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.672089 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.674025 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.674066 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.674082 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.822764 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.824193 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.824215 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.824224 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:08 crc kubenswrapper[4928]: I1122 00:06:08.824248 4928 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 00:06:08 crc kubenswrapper[4928]: E1122 00:06:08.824660 4928 kubelet_node_status.go:99] 
"Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Nov 22 00:06:08 crc kubenswrapper[4928]: W1122 00:06:08.873580 4928 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Nov 22 00:06:08 crc kubenswrapper[4928]: E1122 00:06:08.873714 4928 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Nov 22 00:06:09 crc kubenswrapper[4928]: W1122 00:06:09.506123 4928 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Nov 22 00:06:09 crc kubenswrapper[4928]: E1122 00:06:09.506227 4928 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Nov 22 00:06:09 crc kubenswrapper[4928]: I1122 00:06:09.679413 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:09 crc kubenswrapper[4928]: I1122 00:06:09.679947 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549"} Nov 22 00:06:09 crc kubenswrapper[4928]: I1122 00:06:09.680415 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:09 crc kubenswrapper[4928]: I1122 00:06:09.680506 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:09 crc kubenswrapper[4928]: I1122 00:06:09.680533 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:09 crc kubenswrapper[4928]: I1122 00:06:09.682421 4928 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="66e102144df16586c53c5a2906c0540d2c10a28b773c60e3ffcf5f5304dacb08" exitCode=0 Nov 22 00:06:09 crc kubenswrapper[4928]: I1122 00:06:09.682587 4928 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 22 00:06:09 crc kubenswrapper[4928]: I1122 00:06:09.682594 4928 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 22 00:06:09 crc kubenswrapper[4928]: I1122 00:06:09.682621 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:09 crc kubenswrapper[4928]: I1122 00:06:09.682643 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:09 crc kubenswrapper[4928]: I1122 00:06:09.682643 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"66e102144df16586c53c5a2906c0540d2c10a28b773c60e3ffcf5f5304dacb08"} Nov 22 00:06:09 crc kubenswrapper[4928]: I1122 00:06:09.682701 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 
00:06:09 crc kubenswrapper[4928]: I1122 00:06:09.682632 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:09 crc kubenswrapper[4928]: I1122 00:06:09.683954 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:09 crc kubenswrapper[4928]: I1122 00:06:09.683984 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:09 crc kubenswrapper[4928]: I1122 00:06:09.683995 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:09 crc kubenswrapper[4928]: I1122 00:06:09.684238 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:09 crc kubenswrapper[4928]: I1122 00:06:09.684278 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:09 crc kubenswrapper[4928]: I1122 00:06:09.684293 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:09 crc kubenswrapper[4928]: I1122 00:06:09.684363 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:09 crc kubenswrapper[4928]: I1122 00:06:09.684410 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:09 crc kubenswrapper[4928]: I1122 00:06:09.684436 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:09 crc kubenswrapper[4928]: I1122 00:06:09.685014 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:09 crc kubenswrapper[4928]: I1122 00:06:09.685079 4928 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:09 crc kubenswrapper[4928]: I1122 00:06:09.685099 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:10 crc kubenswrapper[4928]: I1122 00:06:10.691042 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"52f5a8a1806734be5162a454c5faa6bb5dba16d8908d7422521c1c4c6e95bbd7"} Nov 22 00:06:10 crc kubenswrapper[4928]: I1122 00:06:10.691105 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6b57771fb9f7ac7fc2cbde7ab2bd0aacdd4e0ea396ec915ed9fb7acd87d7bd82"} Nov 22 00:06:10 crc kubenswrapper[4928]: I1122 00:06:10.691119 4928 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 22 00:06:10 crc kubenswrapper[4928]: I1122 00:06:10.691184 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:10 crc kubenswrapper[4928]: I1122 00:06:10.691123 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d611bed170e033441a0ada0f977cbf305383f2fc559122eae9c66375d4807655"} Nov 22 00:06:10 crc kubenswrapper[4928]: I1122 00:06:10.691247 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"149a0664b676e3ba87ac6f19b27bb00eb77fa2b4763df9251282f8e527ecca13"} Nov 22 00:06:10 crc kubenswrapper[4928]: I1122 00:06:10.692215 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:10 crc kubenswrapper[4928]: I1122 00:06:10.692267 4928 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:10 crc kubenswrapper[4928]: I1122 00:06:10.692283 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:11 crc kubenswrapper[4928]: I1122 00:06:11.524730 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 00:06:11 crc kubenswrapper[4928]: I1122 00:06:11.524968 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:11 crc kubenswrapper[4928]: I1122 00:06:11.526187 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:11 crc kubenswrapper[4928]: I1122 00:06:11.526216 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:11 crc kubenswrapper[4928]: I1122 00:06:11.526230 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:11 crc kubenswrapper[4928]: I1122 00:06:11.698168 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7a9a396d655ef176af8db1478f44df9b4a1a366aef3c4406cb3a550b1131f8e6"} Nov 22 00:06:11 crc kubenswrapper[4928]: I1122 00:06:11.698325 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:11 crc kubenswrapper[4928]: I1122 00:06:11.699481 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:11 crc kubenswrapper[4928]: I1122 00:06:11.699543 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:11 crc kubenswrapper[4928]: I1122 00:06:11.699569 4928 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:11 crc kubenswrapper[4928]: I1122 00:06:11.993855 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 00:06:11 crc kubenswrapper[4928]: I1122 00:06:11.994058 4928 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 22 00:06:11 crc kubenswrapper[4928]: I1122 00:06:11.994106 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:11 crc kubenswrapper[4928]: I1122 00:06:11.995444 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:11 crc kubenswrapper[4928]: I1122 00:06:11.995487 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:11 crc kubenswrapper[4928]: I1122 00:06:11.995502 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:12 crc kubenswrapper[4928]: I1122 00:06:12.024898 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:12 crc kubenswrapper[4928]: I1122 00:06:12.026575 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:12 crc kubenswrapper[4928]: I1122 00:06:12.026636 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:12 crc kubenswrapper[4928]: I1122 00:06:12.026648 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:12 crc kubenswrapper[4928]: I1122 00:06:12.026680 4928 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 00:06:12 crc kubenswrapper[4928]: I1122 00:06:12.701619 4928 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:12 crc kubenswrapper[4928]: I1122 00:06:12.703338 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:12 crc kubenswrapper[4928]: I1122 00:06:12.703386 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:12 crc kubenswrapper[4928]: I1122 00:06:12.703402 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:12 crc kubenswrapper[4928]: I1122 00:06:12.923243 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 00:06:12 crc kubenswrapper[4928]: I1122 00:06:12.923455 4928 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 22 00:06:12 crc kubenswrapper[4928]: I1122 00:06:12.923514 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:12 crc kubenswrapper[4928]: I1122 00:06:12.925460 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:12 crc kubenswrapper[4928]: I1122 00:06:12.925514 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:12 crc kubenswrapper[4928]: I1122 00:06:12.925532 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:13 crc kubenswrapper[4928]: I1122 00:06:13.052505 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 00:06:13 crc kubenswrapper[4928]: I1122 00:06:13.052773 4928 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 22 00:06:13 crc 
kubenswrapper[4928]: I1122 00:06:13.052869 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:13 crc kubenswrapper[4928]: I1122 00:06:13.054297 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:13 crc kubenswrapper[4928]: I1122 00:06:13.054325 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:13 crc kubenswrapper[4928]: I1122 00:06:13.054337 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:13 crc kubenswrapper[4928]: I1122 00:06:13.490479 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 00:06:13 crc kubenswrapper[4928]: I1122 00:06:13.537681 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 00:06:13 crc kubenswrapper[4928]: I1122 00:06:13.704097 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:13 crc kubenswrapper[4928]: I1122 00:06:13.704199 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:13 crc kubenswrapper[4928]: I1122 00:06:13.705646 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:13 crc kubenswrapper[4928]: I1122 00:06:13.705648 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:13 crc kubenswrapper[4928]: I1122 00:06:13.705738 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:13 crc kubenswrapper[4928]: I1122 00:06:13.705767 4928 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:13 crc kubenswrapper[4928]: I1122 00:06:13.705692 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:13 crc kubenswrapper[4928]: I1122 00:06:13.705851 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:15 crc kubenswrapper[4928]: I1122 00:06:15.432723 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 22 00:06:15 crc kubenswrapper[4928]: I1122 00:06:15.433068 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:15 crc kubenswrapper[4928]: I1122 00:06:15.436583 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:15 crc kubenswrapper[4928]: I1122 00:06:15.436626 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:15 crc kubenswrapper[4928]: I1122 00:06:15.436638 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:15 crc kubenswrapper[4928]: E1122 00:06:15.714599 4928 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 22 00:06:16 crc kubenswrapper[4928]: I1122 00:06:16.845168 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 00:06:16 crc kubenswrapper[4928]: I1122 00:06:16.845449 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:16 crc kubenswrapper[4928]: I1122 00:06:16.848032 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 
00:06:16 crc kubenswrapper[4928]: I1122 00:06:16.848094 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:16 crc kubenswrapper[4928]: I1122 00:06:16.848114 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:16 crc kubenswrapper[4928]: I1122 00:06:16.851935 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 00:06:17 crc kubenswrapper[4928]: I1122 00:06:17.717413 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:17 crc kubenswrapper[4928]: I1122 00:06:17.718701 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:17 crc kubenswrapper[4928]: I1122 00:06:17.718826 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:17 crc kubenswrapper[4928]: I1122 00:06:17.718854 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:19 crc kubenswrapper[4928]: I1122 00:06:19.553478 4928 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 22 00:06:19 crc kubenswrapper[4928]: W1122 00:06:19.659249 4928 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 22 00:06:19 crc kubenswrapper[4928]: I1122 00:06:19.659743 4928 trace.go:236] Trace[780812393]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Nov-2025 00:06:09.657) 
(total time: 10002ms): Nov 22 00:06:19 crc kubenswrapper[4928]: Trace[780812393]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:06:19.659) Nov 22 00:06:19 crc kubenswrapper[4928]: Trace[780812393]: [10.002146772s] [10.002146772s] END Nov 22 00:06:19 crc kubenswrapper[4928]: E1122 00:06:19.659958 4928 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 22 00:06:19 crc kubenswrapper[4928]: E1122 00:06:19.819900 4928 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.187a2b705620a3dd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-22 00:06:05.550281693 +0000 UTC m=+0.838312894,LastTimestamp:2025-11-22 00:06:05.550281693 +0000 UTC m=+0.838312894,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 22 00:06:19 crc kubenswrapper[4928]: I1122 00:06:19.845754 4928 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= 
Nov 22 00:06:19 crc kubenswrapper[4928]: I1122 00:06:19.845995 4928 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 22 00:06:20 crc kubenswrapper[4928]: I1122 00:06:20.087717 4928 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 22 00:06:20 crc kubenswrapper[4928]: I1122 00:06:20.087812 4928 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 22 00:06:20 crc kubenswrapper[4928]: I1122 00:06:20.097302 4928 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 22 00:06:20 crc kubenswrapper[4928]: I1122 00:06:20.097388 4928 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 
403" Nov 22 00:06:21 crc kubenswrapper[4928]: I1122 00:06:21.482348 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 22 00:06:21 crc kubenswrapper[4928]: I1122 00:06:21.482746 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:21 crc kubenswrapper[4928]: I1122 00:06:21.484998 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:21 crc kubenswrapper[4928]: I1122 00:06:21.485074 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:21 crc kubenswrapper[4928]: I1122 00:06:21.485105 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:21 crc kubenswrapper[4928]: I1122 00:06:21.524255 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 22 00:06:21 crc kubenswrapper[4928]: I1122 00:06:21.731098 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:21 crc kubenswrapper[4928]: I1122 00:06:21.732227 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:21 crc kubenswrapper[4928]: I1122 00:06:21.732267 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:21 crc kubenswrapper[4928]: I1122 00:06:21.732279 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:21 crc kubenswrapper[4928]: I1122 00:06:21.750691 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 22 00:06:21 crc kubenswrapper[4928]: I1122 00:06:21.999406 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 00:06:21 crc kubenswrapper[4928]: I1122 00:06:21.999755 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:22 crc kubenswrapper[4928]: I1122 00:06:22.001942 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:22 crc kubenswrapper[4928]: I1122 00:06:22.002008 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:22 crc kubenswrapper[4928]: I1122 00:06:22.002032 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:22 crc kubenswrapper[4928]: I1122 00:06:22.006038 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 00:06:22 crc kubenswrapper[4928]: I1122 00:06:22.734128 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:22 crc kubenswrapper[4928]: I1122 00:06:22.734268 4928 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 00:06:22 crc kubenswrapper[4928]: I1122 00:06:22.735607 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:22 crc kubenswrapper[4928]: I1122 00:06:22.735837 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:22 crc kubenswrapper[4928]: I1122 00:06:22.735946 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:22 crc kubenswrapper[4928]: I1122 00:06:22.736013 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:22 crc kubenswrapper[4928]: 
I1122 00:06:22.736292 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:22 crc kubenswrapper[4928]: I1122 00:06:22.736346 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:24 crc kubenswrapper[4928]: I1122 00:06:24.911353 4928 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 22 00:06:25 crc kubenswrapper[4928]: E1122 00:06:25.087020 4928 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.090453 4928 trace.go:236] Trace[25826788]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Nov-2025 00:06:12.522) (total time: 12568ms): Nov 22 00:06:25 crc kubenswrapper[4928]: Trace[25826788]: ---"Objects listed" error: 12568ms (00:06:25.090) Nov 22 00:06:25 crc kubenswrapper[4928]: Trace[25826788]: [12.568206097s] [12.568206097s] END Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.090483 4928 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.090738 4928 trace.go:236] Trace[698319709]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Nov-2025 00:06:14.743) (total time: 10347ms): Nov 22 00:06:25 crc kubenswrapper[4928]: Trace[698319709]: ---"Objects listed" error: 10347ms (00:06:25.090) Nov 22 00:06:25 crc kubenswrapper[4928]: Trace[698319709]: [10.347169693s] [10.347169693s] END Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.090763 4928 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 
00:06:25.090780 4928 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.092639 4928 trace.go:236] Trace[1736550693]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Nov-2025 00:06:12.744) (total time: 12348ms): Nov 22 00:06:25 crc kubenswrapper[4928]: Trace[1736550693]: ---"Objects listed" error: 12348ms (00:06:25.092) Nov 22 00:06:25 crc kubenswrapper[4928]: Trace[1736550693]: [12.348360713s] [12.348360713s] END Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.092667 4928 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.094969 4928 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.095184 4928 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.096388 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.096429 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.096438 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.096459 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.096469 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:25Z","lastTransitionTime":"2025-11-22T00:06:25Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Nov 22 00:06:25 crc kubenswrapper[4928]: E1122 00:06:25.108264 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"n
ames\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71
-bd2116048a86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.112129 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.112181 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.112194 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.112220 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.112234 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:25Z","lastTransitionTime":"2025-11-22T00:06:25Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Nov 22 00:06:25 crc kubenswrapper[4928]: E1122 00:06:25.128147 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"n
ames\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71
-bd2116048a86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.135626 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.135686 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.135702 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.135761 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.135775 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:25Z","lastTransitionTime":"2025-11-22T00:06:25Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.137206 4928 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60070->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.137270 4928 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60082->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.137283 4928 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60070->192.168.126.11:17697: read: connection reset by peer" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.137340 4928 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60082->192.168.126.11:17697: read: connection reset by peer" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.137668 4928 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: 
connect: connection refused" start-of-body= Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.137720 4928 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.138073 4928 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.138403 4928 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 22 00:06:25 crc kubenswrapper[4928]: E1122 00:06:25.146560 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"n
ames\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71
-bd2116048a86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.151904 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.151958 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.151971 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.151996 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.152007 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:25Z","lastTransitionTime":"2025-11-22T00:06:25Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Nov 22 00:06:25 crc kubenswrapper[4928]: E1122 00:06:25.167818 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"n
ames\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71
-bd2116048a86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.171078 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.171117 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.171126 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.171147 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.171158 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:25Z","lastTransitionTime":"2025-11-22T00:06:25Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Nov 22 00:06:25 crc kubenswrapper[4928]: E1122 00:06:25.180523 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"n
ames\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71
-bd2116048a86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:25 crc kubenswrapper[4928]: E1122 00:06:25.180637 4928 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.182566 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.182604 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.182615 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.182638 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.182649 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:25Z","lastTransitionTime":"2025-11-22T00:06:25Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.285153 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.285188 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.285198 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.285216 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.285226 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:25Z","lastTransitionTime":"2025-11-22T00:06:25Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.388377 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.388458 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.388472 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.388501 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.388519 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:25Z","lastTransitionTime":"2025-11-22T00:06:25Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.490780 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.490847 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.490858 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.490878 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.490889 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:25Z","lastTransitionTime":"2025-11-22T00:06:25Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.546444 4928 apiserver.go:52] "Watching apiserver" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.551227 4928 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.551635 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-dsr54","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.552120 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.552122 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 00:06:25 crc kubenswrapper[4928]: E1122 00:06:25.552238 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.552277 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.552296 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.552414 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.552440 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:25 crc kubenswrapper[4928]: E1122 00:06:25.552627 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.552664 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-dsr54" Nov 22 00:06:25 crc kubenswrapper[4928]: E1122 00:06:25.553129 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.554582 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.554938 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.555172 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.555754 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.555944 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.556072 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.556418 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.556528 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.557202 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.557354 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 
00:06:25.557377 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.557410 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.558959 4928 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.571588 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.582332 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.593695 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.593735 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.593746 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.593767 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.593780 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:25Z","lastTransitionTime":"2025-11-22T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595043 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595087 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595110 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595130 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595152 4928 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595172 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595191 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595214 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595244 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595266 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595289 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595312 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595338 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595359 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595381 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595407 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595435 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595461 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595483 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595504 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595556 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 
00:06:25.595578 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595602 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595623 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595648 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595643 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595668 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595725 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595762 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595782 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595825 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595847 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595872 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595885 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595898 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595927 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595952 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: 
\"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595972 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.595993 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596013 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596024 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596037 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596058 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596082 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596103 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596125 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596149 4928 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596169 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596208 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596229 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596247 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596269 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596290 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596312 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596335 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596338 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596358 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596380 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596400 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596424 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596449 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596489 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596513 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596522 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596536 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596562 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596584 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596587 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596610 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596639 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596872 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596896 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 00:06:25 crc 
kubenswrapper[4928]: I1122 00:06:25.596919 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596941 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596946 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596964 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596988 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.596993 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.597017 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.597043 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.597066 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.597121 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.597138 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.597148 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.597178 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.597207 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.597212 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.597233 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.597258 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.597284 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.597302 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.597308 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.597357 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.597381 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.597401 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.597452 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.597479 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.597498 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.597594 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.597626 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.597652 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.597681 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.597706 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.597724 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.597742 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.597760 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.597779 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.598041 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: 
\"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.598436 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.598458 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.598474 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.598490 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.598506 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 22 00:06:25 crc 
kubenswrapper[4928]: I1122 00:06:25.598522 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.598562 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.598581 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.598598 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.598617 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.598633 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.598650 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.598666 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.598681 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.598697 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.598720 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 22 00:06:25 crc 
kubenswrapper[4928]: I1122 00:06:25.598753 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.598778 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.598821 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.598839 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.598855 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.598874 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.598896 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.598914 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.598933 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.598951 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.598969 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 22 00:06:25 
crc kubenswrapper[4928]: I1122 00:06:25.598987 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599004 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599025 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599043 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599059 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599076 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599102 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599119 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599138 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599154 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599170 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 
22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599189 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599204 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599222 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599237 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599254 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599271 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599286 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599302 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599325 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599344 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599360 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599380 4928 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599398 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599413 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599428 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599444 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599460 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 22 00:06:25 
crc kubenswrapper[4928]: I1122 00:06:25.599480 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599498 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599516 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599534 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599559 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599580 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599599 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599614 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599630 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599646 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599661 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 
00:06:25.599681 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599699 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599716 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599735 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599753 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599772 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599789 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599822 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599847 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599871 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599898 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 00:06:25 crc 
kubenswrapper[4928]: I1122 00:06:25.599922 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599944 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599962 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599980 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599998 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600018 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" 
(UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600035 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600052 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600068 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600091 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600116 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 
00:06:25.600133 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600151 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600169 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600187 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600205 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600221 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600241 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600260 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600278 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600294 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600350 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600384 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600406 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600431 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600452 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw2xg\" (UniqueName: \"kubernetes.io/projected/dd1144bf-61e9-4de0-a360-d6252730bbf3-kube-api-access-sw2xg\") pod \"node-resolver-dsr54\" (UID: \"dd1144bf-61e9-4de0-a360-d6252730bbf3\") " pod="openshift-dns/node-resolver-dsr54" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600471 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") 
pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600495 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600516 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600541 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600562 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600588 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600611 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600629 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600648 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600666 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:06:25 crc kubenswrapper[4928]: 
I1122 00:06:25.600685 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/dd1144bf-61e9-4de0-a360-d6252730bbf3-hosts-file\") pod \"node-resolver-dsr54\" (UID: \"dd1144bf-61e9-4de0-a360-d6252730bbf3\") " pod="openshift-dns/node-resolver-dsr54" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600739 4928 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600751 4928 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600761 4928 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600770 4928 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600779 4928 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600806 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600818 
4928 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600828 4928 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600837 4928 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600847 4928 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.598064 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.598561 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.598741 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.598933 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599315 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599319 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599495 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599664 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599847 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599945 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.602322 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599983 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.599997 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600200 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600313 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600432 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600716 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600964 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600941 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.601644 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.602057 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.602123 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.602144 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.602254 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.602577 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.602607 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.602699 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.602843 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.604048 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.604088 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.604279 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.604314 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.604536 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.604377 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.604669 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.604682 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.604863 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.604935 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.604992 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.604955 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.605024 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.605033 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.605109 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.607079 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.606962 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.607174 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.606440 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.606742 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.607204 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.606762 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.607351 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.606876 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.607396 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.607464 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: E1122 00:06:25.607533 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:06:26.107494618 +0000 UTC m=+21.395525809 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.607592 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.607609 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.607699 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.607815 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.607924 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.607998 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.608148 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.608238 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.608298 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.608417 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.608464 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.608488 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.608544 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.608582 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.608847 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.608875 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.608962 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.609127 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.608660 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.609371 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.609382 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.606325 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.609740 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.609982 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.606620 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.611347 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.611184 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.611978 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.612222 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.612641 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.612949 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.613987 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.614305 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.614513 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.614546 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.614876 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.614709 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.615163 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.615305 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.600328 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.615324 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.615618 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.615636 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.615693 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.615239 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.615699 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.615980 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.606172 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.616091 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.616425 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.616447 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.616451 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.616583 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.616633 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: E1122 00:06:25.616710 4928 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.616707 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.617154 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.614256 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: E1122 00:06:25.617998 4928 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.618458 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.618486 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.615593 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.620406 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.620507 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.620537 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.620712 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.620803 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.621078 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.621160 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.621529 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.621541 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.622148 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.622339 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.618489 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.622510 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.622744 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: E1122 00:06:25.622899 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 00:06:26.12284685 +0000 UTC m=+21.410878041 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 00:06:25 crc kubenswrapper[4928]: E1122 00:06:25.622934 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 00:06:26.122917002 +0000 UTC m=+21.410948193 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.625682 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.625742 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.626063 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.626137 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.631036 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.631513 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.632601 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.632865 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.633015 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.633230 4928 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.634449 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.634849 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.635328 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.635958 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.636087 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.636551 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.636757 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.637478 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.637568 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.637637 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.642120 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.646378 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.647711 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.648434 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.648712 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.652874 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.656145 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.660938 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.661493 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.662230 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.663440 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.664227 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 22 00:06:25 crc kubenswrapper[4928]: E1122 00:06:25.665201 4928 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 00:06:25 crc kubenswrapper[4928]: E1122 00:06:25.665226 4928 projected.go:288] 
Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 00:06:25 crc kubenswrapper[4928]: E1122 00:06:25.665262 4928 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:06:25 crc kubenswrapper[4928]: E1122 00:06:25.665358 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 00:06:26.165309991 +0000 UTC m=+21.453341382 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.665733 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.666489 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.669143 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" 
path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.669574 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.670092 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.670867 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.671069 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.671270 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.671955 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.672989 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.673107 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.671933 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.675888 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.678041 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.678732 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.680905 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.681834 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.682422 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.683855 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.683929 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.684348 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.684887 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.686335 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.686955 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 22 00:06:25 crc kubenswrapper[4928]: E1122 00:06:25.688259 4928 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 00:06:25 crc kubenswrapper[4928]: E1122 00:06:25.688291 4928 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 00:06:25 crc kubenswrapper[4928]: E1122 00:06:25.688303 4928 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:06:25 crc kubenswrapper[4928]: E1122 00:06:25.688366 4928 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 00:06:26.188342415 +0000 UTC m=+21.476373606 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.688762 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.689907 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.690723 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.691106 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.691298 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.691758 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.691773 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.692872 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.693160 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.693201 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.694525 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.695327 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.696070 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.697099 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.697988 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.698782 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.699382 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.700087 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.700367 4928 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.700540 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.702026 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.702061 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.702071 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.702088 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.702098 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:25Z","lastTransitionTime":"2025-11-22T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.702344 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/dd1144bf-61e9-4de0-a360-d6252730bbf3-hosts-file\") pod \"node-resolver-dsr54\" (UID: \"dd1144bf-61e9-4de0-a360-d6252730bbf3\") " pod="openshift-dns/node-resolver-dsr54" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.702408 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.702428 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw2xg\" (UniqueName: \"kubernetes.io/projected/dd1144bf-61e9-4de0-a360-d6252730bbf3-kube-api-access-sw2xg\") pod \"node-resolver-dsr54\" (UID: \"dd1144bf-61e9-4de0-a360-d6252730bbf3\") " pod="openshift-dns/node-resolver-dsr54" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.702446 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.702530 4928 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.702540 4928 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.702551 4928 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.702563 4928 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.702572 4928 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.702582 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.702592 4928 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.702600 4928 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.702610 4928 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.702618 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.702627 4928 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.702633 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.703534 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/dd1144bf-61e9-4de0-a360-d6252730bbf3-hosts-file\") pod \"node-resolver-dsr54\" (UID: \"dd1144bf-61e9-4de0-a360-d6252730bbf3\") " pod="openshift-dns/node-resolver-dsr54" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.703683 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.703703 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.703977 4928 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.702636 4928 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704514 4928 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704525 4928 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704552 4928 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704562 4928 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704570 4928 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704578 4928 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704587 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704596 4928 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704604 4928 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704613 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704623 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704632 4928 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704641 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704649 4928 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704658 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704667 4928 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704676 4928 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704685 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704694 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704703 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: 
\"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704712 4928 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704720 4928 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704729 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704738 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704747 4928 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704755 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704765 4928 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" 
DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704803 4928 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704813 4928 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704821 4928 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704831 4928 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704840 4928 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704849 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704857 4928 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704866 
4928 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704875 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704890 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704899 4928 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704908 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704917 4928 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704925 4928 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704933 4928 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704943 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704951 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704961 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704970 4928 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704980 4928 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.704989 4928 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.705011 4928 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.705020 4928 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.705329 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706617 4928 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706640 4928 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706651 4928 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706661 4928 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706671 4928 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 22 
00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706680 4928 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706689 4928 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706697 4928 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706705 4928 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706714 4928 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706722 4928 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706731 4928 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706740 4928 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706750 4928 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706759 4928 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706786 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706840 4928 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706852 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706865 4928 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706878 4928 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath 
\"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706890 4928 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706902 4928 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706912 4928 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706923 4928 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706932 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706941 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706950 4928 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706959 4928 reconciler_common.go:293] 
"Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706968 4928 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706976 4928 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706987 4928 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.706998 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.707241 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.708176 4928 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.708528 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.709373 4928 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.709409 4928 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.709633 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.709856 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.709895 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.709912 4928 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.709926 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node 
\"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.709937 4928 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.709952 4928 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.709971 4928 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.709988 4928 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.709999 4928 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.710011 4928 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.710021 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.710035 4928 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.710045 4928 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.710056 4928 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.710066 4928 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.710076 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.710087 4928 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.710096 4928 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: 
I1122 00:06:25.710296 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.710827 4928 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.710843 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.710854 4928 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.710865 4928 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.710877 4928 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.710888 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.710901 4928 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.710912 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.710924 4928 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.710935 4928 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.710947 4928 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.710927 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.710957 4928 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.711043 4928 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.711067 4928 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.711084 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.711096 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.711111 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.711124 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" 
DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.711139 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.711153 4928 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.711171 4928 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.711185 4928 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.711198 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.711211 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.711224 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 22 
00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.711237 4928 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.711344 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.711574 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.711787 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.712322 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.711250 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.712782 4928 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.712837 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.712851 4928 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.712864 4928 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.712877 4928 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.712894 4928 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc 
kubenswrapper[4928]: I1122 00:06:25.712908 4928 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.712921 4928 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.712933 4928 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.712944 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.712957 4928 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.712969 4928 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.712982 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.712994 4928 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.713007 4928 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.713019 4928 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.713033 4928 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.713044 4928 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.713056 4928 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.713069 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.713082 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node 
\"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.713095 4928 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.713434 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.714217 4928 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.715040 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.715142 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.715262 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.715279 4928 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.715291 4928 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.715301 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.715296 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.715322 4928 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.715343 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.715192 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.715506 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.716036 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.716545 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.717626 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.717919 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.718264 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.718762 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.719820 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.719958 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.721026 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.721547 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.722200 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.723222 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.726016 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.727052 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.728295 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.745036 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.753046 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw2xg\" (UniqueName: \"kubernetes.io/projected/dd1144bf-61e9-4de0-a360-d6252730bbf3-kube-api-access-sw2xg\") pod \"node-resolver-dsr54\" (UID: \"dd1144bf-61e9-4de0-a360-d6252730bbf3\") " pod="openshift-dns/node-resolver-dsr54"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.753728 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.757656 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.758185 4928 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549" exitCode=255
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.758314 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549"}
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.768424 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.794157 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.804669 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.804924 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.805024 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.805110 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.805192 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:25Z","lastTransitionTime":"2025-11-22T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.809833 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.818192 4928 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.818233 4928 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.818249 4928 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.818263 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.818276 4928 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.818288 4928 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.818301 4928 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.818314 4928 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.818326 4928 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.818340 4928 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.818354 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.818367 4928 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.818379 4928 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.825729 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-hp7cp"]
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.826432 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.830144 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.830177 4928 scope.go:117] "RemoveContainer" containerID="ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.831239 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.831688 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.831934 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.832128 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.832306 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.833724 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.834376 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-kx2zm"]
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.839252 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-kx2zm"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.839954 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-xmk2r"]
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.840655 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-xmk2r"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.850558 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.850888 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.851120 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.851698 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.852459 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.852697 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.853997 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.866329 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.867771 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.875282 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.888189 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.888965 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.893292 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-dsr54"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.901372 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.912109 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.912161 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.912174 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.912194 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.912209 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:25Z","lastTransitionTime":"2025-11-22T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.914953 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.918892 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hpnj\" (UniqueName: \"kubernetes.io/projected/c474173e-56bf-467c-84e6-e473d89563c4-kube-api-access-6hpnj\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.918940 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/897ce64d-0600-4c6a-b684-0c1683fa14bc-mcd-auth-proxy-config\") pod 
\"machine-config-daemon-hp7cp\" (UID: \"897ce64d-0600-4c6a-b684-0c1683fa14bc\") " pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.918959 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-multus-socket-dir-parent\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.918979 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3487b13a-0c53-417a-a73c-4709af1f7a10-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kx2zm\" (UID: \"3487b13a-0c53-417a-a73c-4709af1f7a10\") " pod="openshift-multus/multus-additional-cni-plugins-kx2zm" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.918993 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3487b13a-0c53-417a-a73c-4709af1f7a10-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kx2zm\" (UID: \"3487b13a-0c53-417a-a73c-4709af1f7a10\") " pod="openshift-multus/multus-additional-cni-plugins-kx2zm" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.919091 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/897ce64d-0600-4c6a-b684-0c1683fa14bc-rootfs\") pod \"machine-config-daemon-hp7cp\" (UID: \"897ce64d-0600-4c6a-b684-0c1683fa14bc\") " pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.919306 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-cnibin\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.919428 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-hostroot\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.919538 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjp9g\" (UniqueName: \"kubernetes.io/projected/3487b13a-0c53-417a-a73c-4709af1f7a10-kube-api-access-gjp9g\") pod \"multus-additional-cni-plugins-kx2zm\" (UID: \"3487b13a-0c53-417a-a73c-4709af1f7a10\") " pod="openshift-multus/multus-additional-cni-plugins-kx2zm" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.919619 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-host-run-netns\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.919708 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c474173e-56bf-467c-84e6-e473d89563c4-multus-daemon-config\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.919819 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/897ce64d-0600-4c6a-b684-0c1683fa14bc-proxy-tls\") pod \"machine-config-daemon-hp7cp\" (UID: \"897ce64d-0600-4c6a-b684-0c1683fa14bc\") " pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.919896 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-host-var-lib-kubelet\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.920001 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-os-release\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.920091 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-host-var-lib-cni-bin\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.920242 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-etc-kubernetes\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.920298 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-host-run-multus-certs\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.920322 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3487b13a-0c53-417a-a73c-4709af1f7a10-system-cni-dir\") pod \"multus-additional-cni-plugins-kx2zm\" (UID: \"3487b13a-0c53-417a-a73c-4709af1f7a10\") " pod="openshift-multus/multus-additional-cni-plugins-kx2zm" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.920345 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3487b13a-0c53-417a-a73c-4709af1f7a10-os-release\") pod \"multus-additional-cni-plugins-kx2zm\" (UID: \"3487b13a-0c53-417a-a73c-4709af1f7a10\") " pod="openshift-multus/multus-additional-cni-plugins-kx2zm" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.920381 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-multus-cni-dir\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.920400 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c474173e-56bf-467c-84e6-e473d89563c4-cni-binary-copy\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.920418 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-host-var-lib-cni-multus\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.920441 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnb6x\" (UniqueName: \"kubernetes.io/projected/897ce64d-0600-4c6a-b684-0c1683fa14bc-kube-api-access-tnb6x\") pod \"machine-config-daemon-hp7cp\" (UID: \"897ce64d-0600-4c6a-b684-0c1683fa14bc\") " pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.920460 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-multus-conf-dir\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.920482 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-host-run-k8s-cni-cncf-io\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.920503 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3487b13a-0c53-417a-a73c-4709af1f7a10-cnibin\") pod \"multus-additional-cni-plugins-kx2zm\" (UID: \"3487b13a-0c53-417a-a73c-4709af1f7a10\") " pod="openshift-multus/multus-additional-cni-plugins-kx2zm" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.920522 4928 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3487b13a-0c53-417a-a73c-4709af1f7a10-cni-binary-copy\") pod \"multus-additional-cni-plugins-kx2zm\" (UID: \"3487b13a-0c53-417a-a73c-4709af1f7a10\") " pod="openshift-multus/multus-additional-cni-plugins-kx2zm" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.920552 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-system-cni-dir\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.930803 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:25 crc kubenswrapper[4928]: W1122 00:06:25.949992 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-552c0576d94da16e804bc41f20fadbaf68f9cd97d25f2f38d3d887b00c3be34f WatchSource:0}: Error finding container 552c0576d94da16e804bc41f20fadbaf68f9cd97d25f2f38d3d887b00c3be34f: Status 404 returned error can't find the container with id 
552c0576d94da16e804bc41f20fadbaf68f9cd97d25f2f38d3d887b00c3be34f Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.950416 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.971178 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:25 crc kubenswrapper[4928]: W1122 00:06:25.979091 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd1144bf_61e9_4de0_a360_d6252730bbf3.slice/crio-7465c10999adae21936b3b0a001e13ac4f61f56f765bf54fb5a12fbea1159195 WatchSource:0}: Error finding container 7465c10999adae21936b3b0a001e13ac4f61f56f765bf54fb5a12fbea1159195: Status 404 returned error can't find the container with id 
7465c10999adae21936b3b0a001e13ac4f61f56f765bf54fb5a12fbea1159195 Nov 22 00:06:25 crc kubenswrapper[4928]: I1122 00:06:25.990740 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.008115 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.017546 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.017582 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.017594 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.017613 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.017626 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:26Z","lastTransitionTime":"2025-11-22T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.021220 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c474173e-56bf-467c-84e6-e473d89563c4-multus-daemon-config\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.021380 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/897ce64d-0600-4c6a-b684-0c1683fa14bc-proxy-tls\") pod \"machine-config-daemon-hp7cp\" (UID: \"897ce64d-0600-4c6a-b684-0c1683fa14bc\") " pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.022608 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-os-release\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.022755 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-host-var-lib-kubelet\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.022949 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-host-var-lib-cni-bin\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.023068 4928 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-etc-kubernetes\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.023193 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c474173e-56bf-467c-84e6-e473d89563c4-cni-binary-copy\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.023300 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-host-var-lib-cni-multus\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.023420 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-host-run-multus-certs\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.023521 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3487b13a-0c53-417a-a73c-4709af1f7a10-system-cni-dir\") pod \"multus-additional-cni-plugins-kx2zm\" (UID: \"3487b13a-0c53-417a-a73c-4709af1f7a10\") " pod="openshift-multus/multus-additional-cni-plugins-kx2zm" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.023610 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" 
(UniqueName: \"kubernetes.io/host-path/3487b13a-0c53-417a-a73c-4709af1f7a10-os-release\") pod \"multus-additional-cni-plugins-kx2zm\" (UID: \"3487b13a-0c53-417a-a73c-4709af1f7a10\") " pod="openshift-multus/multus-additional-cni-plugins-kx2zm" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.023707 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-multus-cni-dir\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.022787 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-host-var-lib-kubelet\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.022761 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-os-release\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.024030 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-host-var-lib-cni-bin\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.022225 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c474173e-56bf-467c-84e6-e473d89563c4-multus-daemon-config\") pod \"multus-xmk2r\" (UID: 
\"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.024080 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-etc-kubernetes\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.024114 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-host-run-multus-certs\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.024349 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnb6x\" (UniqueName: \"kubernetes.io/projected/897ce64d-0600-4c6a-b684-0c1683fa14bc-kube-api-access-tnb6x\") pod \"machine-config-daemon-hp7cp\" (UID: \"897ce64d-0600-4c6a-b684-0c1683fa14bc\") " pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.024473 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-multus-conf-dir\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.024588 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-system-cni-dir\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc 
kubenswrapper[4928]: I1122 00:06:26.024703 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-multus-cni-dir\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.024658 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-host-var-lib-cni-multus\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.024758 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3487b13a-0c53-417a-a73c-4709af1f7a10-system-cni-dir\") pod \"multus-additional-cni-plugins-kx2zm\" (UID: \"3487b13a-0c53-417a-a73c-4709af1f7a10\") " pod="openshift-multus/multus-additional-cni-plugins-kx2zm" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.024823 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3487b13a-0c53-417a-a73c-4709af1f7a10-os-release\") pod \"multus-additional-cni-plugins-kx2zm\" (UID: \"3487b13a-0c53-417a-a73c-4709af1f7a10\") " pod="openshift-multus/multus-additional-cni-plugins-kx2zm" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.024856 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-multus-conf-dir\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.025031 4928 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-host-run-k8s-cni-cncf-io\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.025137 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3487b13a-0c53-417a-a73c-4709af1f7a10-cnibin\") pod \"multus-additional-cni-plugins-kx2zm\" (UID: \"3487b13a-0c53-417a-a73c-4709af1f7a10\") " pod="openshift-multus/multus-additional-cni-plugins-kx2zm" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.025255 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3487b13a-0c53-417a-a73c-4709af1f7a10-cni-binary-copy\") pod \"multus-additional-cni-plugins-kx2zm\" (UID: \"3487b13a-0c53-417a-a73c-4709af1f7a10\") " pod="openshift-multus/multus-additional-cni-plugins-kx2zm" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.025374 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/897ce64d-0600-4c6a-b684-0c1683fa14bc-mcd-auth-proxy-config\") pod \"machine-config-daemon-hp7cp\" (UID: \"897ce64d-0600-4c6a-b684-0c1683fa14bc\") " pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.025471 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-multus-socket-dir-parent\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.025581 4928 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-6hpnj\" (UniqueName: \"kubernetes.io/projected/c474173e-56bf-467c-84e6-e473d89563c4-kube-api-access-6hpnj\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.025693 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-cnibin\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.025180 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-host-run-k8s-cni-cncf-io\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.024624 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c474173e-56bf-467c-84e6-e473d89563c4-cni-binary-copy\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.025159 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-system-cni-dir\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.025201 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3487b13a-0c53-417a-a73c-4709af1f7a10-cnibin\") pod \"multus-additional-cni-plugins-kx2zm\" (UID: 
\"3487b13a-0c53-417a-a73c-4709af1f7a10\") " pod="openshift-multus/multus-additional-cni-plugins-kx2zm" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.026256 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-multus-socket-dir-parent\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.026401 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3487b13a-0c53-417a-a73c-4709af1f7a10-cni-binary-copy\") pod \"multus-additional-cni-plugins-kx2zm\" (UID: \"3487b13a-0c53-417a-a73c-4709af1f7a10\") " pod="openshift-multus/multus-additional-cni-plugins-kx2zm" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.026378 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.026465 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-cnibin\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.026582 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-hostroot\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.027216 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/897ce64d-0600-4c6a-b684-0c1683fa14bc-proxy-tls\") pod \"machine-config-daemon-hp7cp\" (UID: \"897ce64d-0600-4c6a-b684-0c1683fa14bc\") " pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" Nov 22 00:06:26 crc 
kubenswrapper[4928]: I1122 00:06:26.025785 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-hostroot\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.027503 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3487b13a-0c53-417a-a73c-4709af1f7a10-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kx2zm\" (UID: \"3487b13a-0c53-417a-a73c-4709af1f7a10\") " pod="openshift-multus/multus-additional-cni-plugins-kx2zm" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.027743 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3487b13a-0c53-417a-a73c-4709af1f7a10-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kx2zm\" (UID: \"3487b13a-0c53-417a-a73c-4709af1f7a10\") " pod="openshift-multus/multus-additional-cni-plugins-kx2zm" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.027918 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/897ce64d-0600-4c6a-b684-0c1683fa14bc-rootfs\") pod \"machine-config-daemon-hp7cp\" (UID: \"897ce64d-0600-4c6a-b684-0c1683fa14bc\") " pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.028030 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/897ce64d-0600-4c6a-b684-0c1683fa14bc-rootfs\") pod \"machine-config-daemon-hp7cp\" (UID: \"897ce64d-0600-4c6a-b684-0c1683fa14bc\") " pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 
00:06:26.028039 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjp9g\" (UniqueName: \"kubernetes.io/projected/3487b13a-0c53-417a-a73c-4709af1f7a10-kube-api-access-gjp9g\") pod \"multus-additional-cni-plugins-kx2zm\" (UID: \"3487b13a-0c53-417a-a73c-4709af1f7a10\") " pod="openshift-multus/multus-additional-cni-plugins-kx2zm" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.028246 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-host-run-netns\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.028250 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3487b13a-0c53-417a-a73c-4709af1f7a10-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kx2zm\" (UID: \"3487b13a-0c53-417a-a73c-4709af1f7a10\") " pod="openshift-multus/multus-additional-cni-plugins-kx2zm" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.028280 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c474173e-56bf-467c-84e6-e473d89563c4-host-run-netns\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.030422 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/897ce64d-0600-4c6a-b684-0c1683fa14bc-mcd-auth-proxy-config\") pod \"machine-config-daemon-hp7cp\" (UID: \"897ce64d-0600-4c6a-b684-0c1683fa14bc\") " pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.041419 4928 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3487b13a-0c53-417a-a73c-4709af1f7a10-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kx2zm\" (UID: \"3487b13a-0c53-417a-a73c-4709af1f7a10\") " pod="openshift-multus/multus-additional-cni-plugins-kx2zm" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.043195 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnb6x\" (UniqueName: \"kubernetes.io/projected/897ce64d-0600-4c6a-b684-0c1683fa14bc-kube-api-access-tnb6x\") pod \"machine-config-daemon-hp7cp\" (UID: \"897ce64d-0600-4c6a-b684-0c1683fa14bc\") " pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.046227 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.049868 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjp9g\" (UniqueName: \"kubernetes.io/projected/3487b13a-0c53-417a-a73c-4709af1f7a10-kube-api-access-gjp9g\") pod \"multus-additional-cni-plugins-kx2zm\" (UID: \"3487b13a-0c53-417a-a73c-4709af1f7a10\") " pod="openshift-multus/multus-additional-cni-plugins-kx2zm" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.056429 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hpnj\" (UniqueName: 
\"kubernetes.io/projected/c474173e-56bf-467c-84e6-e473d89563c4-kube-api-access-6hpnj\") pod \"multus-xmk2r\" (UID: \"c474173e-56bf-467c-84e6-e473d89563c4\") " pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.059211 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.071535 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.078059 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.085776 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.096833 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.109718 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": 
net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.120959 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.120996 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.121007 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.121028 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.121041 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:26Z","lastTransitionTime":"2025-11-22T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.123304 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.128848 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.128976 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.129012 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:26 crc kubenswrapper[4928]: E1122 00:06:26.129091 4928 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 00:06:26 crc kubenswrapper[4928]: E1122 00:06:26.129146 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 00:06:27.129127708 +0000 UTC m=+22.417158899 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 00:06:26 crc kubenswrapper[4928]: E1122 00:06:26.129199 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:06:27.12919195 +0000 UTC m=+22.417223141 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:06:26 crc kubenswrapper[4928]: E1122 00:06:26.129258 4928 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 00:06:26 crc kubenswrapper[4928]: E1122 00:06:26.129277 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 00:06:27.129272382 +0000 UTC m=+22.417303573 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.162915 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.174269 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.174760 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.189363 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers 
with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.190740 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-xmk2r" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.203348 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.214635 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nchmx"] Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.215560 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.226691 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.227645 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.227932 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.228059 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.228175 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.228296 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.228398 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.228517 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.231699 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.231780 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:06:26 crc kubenswrapper[4928]: E1122 00:06:26.231922 4928 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 00:06:26 crc kubenswrapper[4928]: E1122 00:06:26.231942 4928 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 00:06:26 crc kubenswrapper[4928]: E1122 00:06:26.231947 4928 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 00:06:26 crc kubenswrapper[4928]: E1122 00:06:26.231953 4928 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:06:26 crc kubenswrapper[4928]: E1122 00:06:26.231969 4928 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 00:06:26 crc kubenswrapper[4928]: E1122 00:06:26.231983 4928 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:06:26 crc kubenswrapper[4928]: E1122 00:06:26.232016 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 00:06:27.231995533 +0000 UTC m=+22.520026724 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:06:26 crc kubenswrapper[4928]: E1122 00:06:26.232046 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 00:06:27.232025053 +0000 UTC m=+22.520056334 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.242165 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.242364 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.242406 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.242415 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.242459 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.242470 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:26Z","lastTransitionTime":"2025-11-22T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.258980 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.274980 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.286228 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.298714 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.308771 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.319287 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.333104 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-systemd-units\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.333159 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.333298 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/31460a9d-62d5-4dae-bb57-4db45544352d-ovnkube-script-lib\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.333333 4928 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-run-netns\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.333386 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-var-lib-openvswitch\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.333411 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-etc-openvswitch\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.333476 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74sb5\" (UniqueName: \"kubernetes.io/projected/31460a9d-62d5-4dae-bb57-4db45544352d-kube-api-access-74sb5\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.333498 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-run-ovn\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: 
I1122 00:06:26.333668 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-cni-netd\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.334347 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.334556 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-node-log\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.334616 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/31460a9d-62d5-4dae-bb57-4db45544352d-ovnkube-config\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.334643 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/31460a9d-62d5-4dae-bb57-4db45544352d-ovn-node-metrics-cert\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.334681 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-slash\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.334702 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-run-openvswitch\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.335001 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-run-systemd\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.335043 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-run-ovn-kubernetes\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.335093 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-kubelet\") pod \"ovnkube-node-nchmx\" (UID: 
\"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.335112 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-cni-bin\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.335144 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/31460a9d-62d5-4dae-bb57-4db45544352d-env-overrides\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.335214 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-log-socket\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.346399 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.349617 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.349653 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.349663 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.349681 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.349694 4928 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:26Z","lastTransitionTime":"2025-11-22T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.357488 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.374443 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.387748 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.398604 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.416269 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.434740 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.437147 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-systemd-units\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.437205 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-var-lib-cni-networks-ovn-kubernetes\") 
pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.437240 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/31460a9d-62d5-4dae-bb57-4db45544352d-ovnkube-script-lib\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.437269 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-run-netns\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.437292 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-var-lib-openvswitch\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.437314 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-etc-openvswitch\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.437356 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74sb5\" (UniqueName: \"kubernetes.io/projected/31460a9d-62d5-4dae-bb57-4db45544352d-kube-api-access-74sb5\") pod \"ovnkube-node-nchmx\" (UID: 
\"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.437382 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-run-ovn\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.437413 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-cni-netd\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.437438 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-node-log\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.437475 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/31460a9d-62d5-4dae-bb57-4db45544352d-ovnkube-config\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.437505 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/31460a9d-62d5-4dae-bb57-4db45544352d-ovn-node-metrics-cert\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" 
Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.437530 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-slash\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.437561 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-run-openvswitch\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.437586 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-run-systemd\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.437627 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-run-ovn-kubernetes\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.437652 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-kubelet\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.437672 4928 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-cni-bin\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.437693 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/31460a9d-62d5-4dae-bb57-4db45544352d-env-overrides\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.437724 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-log-socket\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.438148 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-node-log\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.438260 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-etc-openvswitch\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.438413 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-var-lib-openvswitch\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.438522 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-systemd-units\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.438573 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.438710 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-run-ovn\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.438750 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-cni-netd\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.438782 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-run-systemd\") pod 
\"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.439340 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-run-netns\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.439431 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-slash\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.439515 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/31460a9d-62d5-4dae-bb57-4db45544352d-ovnkube-script-lib\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.439577 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-run-openvswitch\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.439626 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-cni-bin\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.439640 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/31460a9d-62d5-4dae-bb57-4db45544352d-ovnkube-config\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.439676 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-run-ovn-kubernetes\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.439693 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-kubelet\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.439733 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-log-socket\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.440164 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/31460a9d-62d5-4dae-bb57-4db45544352d-env-overrides\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.453578 
4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/31460a9d-62d5-4dae-bb57-4db45544352d-ovn-node-metrics-cert\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.457312 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.468844 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.468873 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.468881 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.468898 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.468908 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:26Z","lastTransitionTime":"2025-11-22T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.474633 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.485560 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74sb5\" (UniqueName: \"kubernetes.io/projected/31460a9d-62d5-4dae-bb57-4db45544352d-kube-api-access-74sb5\") pod \"ovnkube-node-nchmx\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.542233 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.572542 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.572580 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.572589 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.572607 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.572618 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:26Z","lastTransitionTime":"2025-11-22T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.674654 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.674708 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.674721 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.674742 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.674760 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:26Z","lastTransitionTime":"2025-11-22T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.763818 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dsr54" event={"ID":"dd1144bf-61e9-4de0-a360-d6252730bbf3","Type":"ContainerStarted","Data":"fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9"} Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.763890 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dsr54" event={"ID":"dd1144bf-61e9-4de0-a360-d6252730bbf3","Type":"ContainerStarted","Data":"7465c10999adae21936b3b0a001e13ac4f61f56f765bf54fb5a12fbea1159195"} Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.766187 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xmk2r" event={"ID":"c474173e-56bf-467c-84e6-e473d89563c4","Type":"ContainerStarted","Data":"18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355"} Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.766273 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xmk2r" event={"ID":"c474173e-56bf-467c-84e6-e473d89563c4","Type":"ContainerStarted","Data":"fc1eea45656241065d0c5e783f8c8ee06a7ace4f89453373a254f4696a698704"} Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.767432 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"fbce103d4abf8db49c88d8b11916012878aca30c719c1181df4ba114af379f59"} Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.769857 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e"} Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.769917 4928 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4"} Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.769939 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b2652f412b1f665969693760b9c4e8ecebb48f73046256b1b7ac0b0e57293521"} Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.772312 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.774933 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105"} Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.775687 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.776160 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" event={"ID":"31460a9d-62d5-4dae-bb57-4db45544352d","Type":"ContainerStarted","Data":"93763d03266f4b3e359e3786cdcc3aceb4bc652751659120ad138f35c8791243"} Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.777056 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.777091 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.777104 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.777123 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.777137 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:26Z","lastTransitionTime":"2025-11-22T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.777702 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" event={"ID":"3487b13a-0c53-417a-a73c-4709af1f7a10","Type":"ContainerStarted","Data":"6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a"} Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.777756 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" event={"ID":"3487b13a-0c53-417a-a73c-4709af1f7a10","Type":"ContainerStarted","Data":"33977ab5bfaafcfa69f74b5b96acf9ef0859e5b44e7c95731635908860bd7de6"} Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.780130 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" event={"ID":"897ce64d-0600-4c6a-b684-0c1683fa14bc","Type":"ContainerStarted","Data":"9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686"} Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.780163 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" event={"ID":"897ce64d-0600-4c6a-b684-0c1683fa14bc","Type":"ContainerStarted","Data":"cf4923905a54d888d652ff50f477aa1487106dd4860c1c35b374fce196d89a70"} Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.780175 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" event={"ID":"897ce64d-0600-4c6a-b684-0c1683fa14bc","Type":"ContainerStarted","Data":"af5510a23c78e4ed8fb27498455e20e31f79b324f1f8e098863eb4d29ddfe4f1"} Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.782960 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b"} Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.782990 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"552c0576d94da16e804bc41f20fadbaf68f9cd97d25f2f38d3d887b00c3be34f"} Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.788534 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": 
net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.801943 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.812268 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.820230 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.829811 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.843502 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.848751 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.852947 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.856559 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.858278 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.868934 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.880219 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.880269 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.880283 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.880302 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.880314 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:26Z","lastTransitionTime":"2025-11-22T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.882857 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.894607 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.909834 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.930587 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.960971 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.983294 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.983340 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.983351 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.983370 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.983382 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:26Z","lastTransitionTime":"2025-11-22T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:26 crc kubenswrapper[4928]: I1122 00:06:26.998407 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.041508 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:27Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.078240 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:27Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.086195 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.086230 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.086244 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.086263 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.086276 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:27Z","lastTransitionTime":"2025-11-22T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.123624 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:27Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.145474 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.145631 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.145713 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:27 
crc kubenswrapper[4928]: E1122 00:06:27.145780 4928 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 00:06:27 crc kubenswrapper[4928]: E1122 00:06:27.145924 4928 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 00:06:27 crc kubenswrapper[4928]: E1122 00:06:27.145927 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 00:06:29.14589354 +0000 UTC m=+24.433924761 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 00:06:27 crc kubenswrapper[4928]: E1122 00:06:27.146024 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 00:06:29.145985543 +0000 UTC m=+24.434016734 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 00:06:27 crc kubenswrapper[4928]: E1122 00:06:27.146068 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:06:29.146050344 +0000 UTC m=+24.434081565 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.155735 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:27Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.189018 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.189057 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.189075 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.189091 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.189101 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:27Z","lastTransitionTime":"2025-11-22T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.198831 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:27Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.240730 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runni
ng\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ent
rypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:27Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.247076 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.247139 4928 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:06:27 crc kubenswrapper[4928]: E1122 00:06:27.247271 4928 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 00:06:27 crc kubenswrapper[4928]: E1122 00:06:27.247287 4928 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 00:06:27 crc kubenswrapper[4928]: E1122 00:06:27.247299 4928 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:06:27 crc kubenswrapper[4928]: E1122 00:06:27.247317 4928 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 00:06:27 crc kubenswrapper[4928]: E1122 00:06:27.247354 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 00:06:29.247338085 +0000 UTC m=+24.535369276 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:06:27 crc kubenswrapper[4928]: E1122 00:06:27.247353 4928 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 00:06:27 crc kubenswrapper[4928]: E1122 00:06:27.247381 4928 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:06:27 crc kubenswrapper[4928]: E1122 00:06:27.247457 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 00:06:29.247436377 +0000 UTC m=+24.535467588 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.277309 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\"
:\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:27Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.292067 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.292120 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.292134 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.292156 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.292170 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:27Z","lastTransitionTime":"2025-11-22T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.320014 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:27Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.357935 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:27Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.395110 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.395166 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.395179 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.395200 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.395214 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:27Z","lastTransitionTime":"2025-11-22T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.397436 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:27Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.435706 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4
860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:27Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.498152 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.498202 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.498216 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:27 crc 
kubenswrapper[4928]: I1122 00:06:27.498236 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.498250 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:27Z","lastTransitionTime":"2025-11-22T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.601220 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.601259 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.601269 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.601285 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.601298 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:27Z","lastTransitionTime":"2025-11-22T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.617603 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.617615 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:06:27 crc kubenswrapper[4928]: E1122 00:06:27.617762 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.617702 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:27 crc kubenswrapper[4928]: E1122 00:06:27.617924 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:06:27 crc kubenswrapper[4928]: E1122 00:06:27.617929 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.622622 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.624101 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.626343 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.627607 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.628470 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.630053 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.630881 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.632360 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.633120 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.635307 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.636009 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.704419 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.704452 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.704461 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.704476 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.704485 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:27Z","lastTransitionTime":"2025-11-22T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.787201 4928 generic.go:334] "Generic (PLEG): container finished" podID="3487b13a-0c53-417a-a73c-4709af1f7a10" containerID="6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a" exitCode=0 Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.787368 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" event={"ID":"3487b13a-0c53-417a-a73c-4709af1f7a10","Type":"ContainerDied","Data":"6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a"} Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.794555 4928 generic.go:334] "Generic (PLEG): container finished" podID="31460a9d-62d5-4dae-bb57-4db45544352d" containerID="5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0" exitCode=0 Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.794762 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" event={"ID":"31460a9d-62d5-4dae-bb57-4db45544352d","Type":"ContainerDied","Data":"5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0"} Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.810348 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.810406 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.810423 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.810471 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.810490 4928 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:27Z","lastTransitionTime":"2025-11-22T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.828362 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:27Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.843659 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:27Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.860383 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:27Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.872192 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:27Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.888328 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:27Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.907513 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:27Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.915288 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.915339 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.915364 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.915384 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.915397 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:27Z","lastTransitionTime":"2025-11-22T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.920284 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:27Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.959444 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:27Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:27 crc kubenswrapper[4928]: I1122 00:06:27.994205 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:27Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.012156 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:28Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.018256 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.018307 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.018326 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.018352 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.018365 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:28Z","lastTransitionTime":"2025-11-22T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.032752 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:28Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.048001 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4
860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:28Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.070147 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:28Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.081352 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:28Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.094051 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:28Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.109996 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:28Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.121850 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.121892 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:28 crc 
kubenswrapper[4928]: I1122 00:06:28.121901 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.121918 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.121927 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:28Z","lastTransitionTime":"2025-11-22T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.131596 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:28Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.168829 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:28Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.203206 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:28Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.225029 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.225187 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.225255 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.225332 4928 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.225419 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:28Z","lastTransitionTime":"2025-11-22T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.239748 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:28Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.287638 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00
:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:28Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.326723 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:28Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.327856 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.328019 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.328114 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.328188 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.328252 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:28Z","lastTransitionTime":"2025-11-22T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.364365 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b2
8964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:28Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.402317 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:28Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.431698 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.431747 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.431764 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.431782 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.431812 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:28Z","lastTransitionTime":"2025-11-22T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.437835 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:28Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.477590 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:28Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.533932 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.533978 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.533988 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.534010 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.534025 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:28Z","lastTransitionTime":"2025-11-22T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.635986 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.636038 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.636050 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.636068 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.636080 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:28Z","lastTransitionTime":"2025-11-22T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.738906 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.738968 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.738984 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.739021 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.739077 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:28Z","lastTransitionTime":"2025-11-22T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.800724 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" event={"ID":"31460a9d-62d5-4dae-bb57-4db45544352d","Type":"ContainerStarted","Data":"40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185"} Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.800784 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" event={"ID":"31460a9d-62d5-4dae-bb57-4db45544352d","Type":"ContainerStarted","Data":"05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2"} Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.800815 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" event={"ID":"31460a9d-62d5-4dae-bb57-4db45544352d","Type":"ContainerStarted","Data":"0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf"} Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.800828 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" event={"ID":"31460a9d-62d5-4dae-bb57-4db45544352d","Type":"ContainerStarted","Data":"9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e"} Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.800840 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" event={"ID":"31460a9d-62d5-4dae-bb57-4db45544352d","Type":"ContainerStarted","Data":"c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9"} Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.802982 4928 generic.go:334] "Generic (PLEG): container finished" podID="3487b13a-0c53-417a-a73c-4709af1f7a10" containerID="506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3" exitCode=0 Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.803047 4928 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" event={"ID":"3487b13a-0c53-417a-a73c-4709af1f7a10","Type":"ContainerDied","Data":"506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3"} Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.805385 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7"} Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.821833 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:28Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.836247 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:28Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.848346 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.848398 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 
00:06:28.848419 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.848447 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.848458 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:28Z","lastTransitionTime":"2025-11-22T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.852516 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:28Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.868909 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:28Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.889057 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-11-22T00:06:28Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.903270 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:28Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.917293 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:28Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.932814 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:28Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.944711 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4
860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:28Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.950995 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.951032 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.951041 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:28 crc 
kubenswrapper[4928]: I1122 00:06:28.951057 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.951069 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:28Z","lastTransitionTime":"2025-11-22T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.960197 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:28Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.975928 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:28Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:28 crc kubenswrapper[4928]: I1122 00:06:28.989708 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:28Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.012397 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:29Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.040161 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:29Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.054263 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.054346 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.054367 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.054394 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.054414 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:29Z","lastTransitionTime":"2025-11-22T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.077231 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:29Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.118043 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:29Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.155138 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-lwwmp"] Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.155850 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-lwwmp" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.156860 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.156957 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.156986 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.157038 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.157071 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:29Z","lastTransitionTime":"2025-11-22T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.160310 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b2
8964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:29Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.169572 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.169595 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.169729 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:29 crc kubenswrapper[4928]: E1122 00:06:29.169810 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:06:33.169749738 +0000 UTC m=+28.457780939 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:06:29 crc kubenswrapper[4928]: E1122 00:06:29.169881 4928 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.169924 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:29 crc kubenswrapper[4928]: E1122 00:06:29.169939 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 00:06:33.169922212 +0000 UTC m=+28.457953403 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 00:06:29 crc kubenswrapper[4928]: E1122 00:06:29.169976 4928 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 00:06:29 crc kubenswrapper[4928]: E1122 00:06:29.170082 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 00:06:33.170074546 +0000 UTC m=+28.458105737 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.189058 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.209715 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.228881 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.260522 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.260585 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.260602 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.260631 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.260647 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:29Z","lastTransitionTime":"2025-11-22T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.270639 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/07bc9047-a464-4661-8a73-9d5919a4f15f-serviceca\") pod \"node-ca-lwwmp\" (UID: \"07bc9047-a464-4661-8a73-9d5919a4f15f\") " pod="openshift-image-registry/node-ca-lwwmp" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.270702 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.270764 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.270841 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/07bc9047-a464-4661-8a73-9d5919a4f15f-host\") pod \"node-ca-lwwmp\" (UID: \"07bc9047-a464-4661-8a73-9d5919a4f15f\") " pod="openshift-image-registry/node-ca-lwwmp" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.270868 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pbm2\" (UniqueName: \"kubernetes.io/projected/07bc9047-a464-4661-8a73-9d5919a4f15f-kube-api-access-4pbm2\") pod \"node-ca-lwwmp\" (UID: 
\"07bc9047-a464-4661-8a73-9d5919a4f15f\") " pod="openshift-image-registry/node-ca-lwwmp" Nov 22 00:06:29 crc kubenswrapper[4928]: E1122 00:06:29.271096 4928 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 00:06:29 crc kubenswrapper[4928]: E1122 00:06:29.271152 4928 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 00:06:29 crc kubenswrapper[4928]: E1122 00:06:29.271157 4928 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 00:06:29 crc kubenswrapper[4928]: E1122 00:06:29.271211 4928 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 00:06:29 crc kubenswrapper[4928]: E1122 00:06:29.271236 4928 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:06:29 crc kubenswrapper[4928]: E1122 00:06:29.271175 4928 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:06:29 crc kubenswrapper[4928]: E1122 00:06:29.271422 4928 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 00:06:33.271347326 +0000 UTC m=+28.559378557 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:06:29 crc kubenswrapper[4928]: E1122 00:06:29.271478 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 00:06:33.271462739 +0000 UTC m=+28.559493960 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.276705 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resol
ver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:29Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.324352 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:29Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.363876 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:29Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.364622 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.364716 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.364741 4928 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.364770 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.365017 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:29Z","lastTransitionTime":"2025-11-22T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.371547 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/07bc9047-a464-4661-8a73-9d5919a4f15f-serviceca\") pod \"node-ca-lwwmp\" (UID: \"07bc9047-a464-4661-8a73-9d5919a4f15f\") " pod="openshift-image-registry/node-ca-lwwmp" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.371749 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/07bc9047-a464-4661-8a73-9d5919a4f15f-host\") pod \"node-ca-lwwmp\" (UID: \"07bc9047-a464-4661-8a73-9d5919a4f15f\") " pod="openshift-image-registry/node-ca-lwwmp" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.371851 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pbm2\" (UniqueName: \"kubernetes.io/projected/07bc9047-a464-4661-8a73-9d5919a4f15f-kube-api-access-4pbm2\") pod \"node-ca-lwwmp\" (UID: \"07bc9047-a464-4661-8a73-9d5919a4f15f\") " pod="openshift-image-registry/node-ca-lwwmp" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.371996 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/07bc9047-a464-4661-8a73-9d5919a4f15f-host\") pod \"node-ca-lwwmp\" (UID: \"07bc9047-a464-4661-8a73-9d5919a4f15f\") " pod="openshift-image-registry/node-ca-lwwmp" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.373708 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/07bc9047-a464-4661-8a73-9d5919a4f15f-serviceca\") pod \"node-ca-lwwmp\" (UID: \"07bc9047-a464-4661-8a73-9d5919a4f15f\") " pod="openshift-image-registry/node-ca-lwwmp" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.396223 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:29Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.426917 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pbm2\" (UniqueName: \"kubernetes.io/projected/07bc9047-a464-4661-8a73-9d5919a4f15f-kube-api-access-4pbm2\") pod \"node-ca-lwwmp\" (UID: \"07bc9047-a464-4661-8a73-9d5919a4f15f\") " pod="openshift-image-registry/node-ca-lwwmp" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.461280 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\
",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:29Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.468452 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.468547 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.468880 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.468942 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.468964 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:29Z","lastTransitionTime":"2025-11-22T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.479070 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-lwwmp" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.501556 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:29Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.540551 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:29Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.571309 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.571371 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.571394 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.571429 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.571453 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:29Z","lastTransitionTime":"2025-11-22T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.578948 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:29Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.617272 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.617272 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.617438 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:06:29 crc kubenswrapper[4928]: E1122 00:06:29.617619 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:06:29 crc kubenswrapper[4928]: E1122 00:06:29.617712 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:06:29 crc kubenswrapper[4928]: E1122 00:06:29.617757 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.627112 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:29Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.657225 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:29Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.674547 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.674586 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.674596 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.674610 4928 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.674620 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:29Z","lastTransitionTime":"2025-11-22T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.697886 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:29Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.745730 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-11-22T00:06:29Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.777409 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.777470 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.777486 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.777513 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.777529 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:29Z","lastTransitionTime":"2025-11-22T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.780268 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:29Z 
is after 2025-08-24T17:21:41Z" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.812861 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" event={"ID":"31460a9d-62d5-4dae-bb57-4db45544352d","Type":"ContainerStarted","Data":"065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f"} Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.814223 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lwwmp" event={"ID":"07bc9047-a464-4661-8a73-9d5919a4f15f","Type":"ContainerStarted","Data":"378c39793deccb986522ae643c2da3b423aa1711bc63648c3cbf01db65f95823"} Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.814606 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lwwmp" event={"ID":"07bc9047-a464-4661-8a73-9d5919a4f15f","Type":"ContainerStarted","Data":"f70fbaecac109e9e9f31ce59c3d61af70387168537aa62390947704521794848"} Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.818253 4928 generic.go:334] "Generic (PLEG): container finished" podID="3487b13a-0c53-417a-a73c-4709af1f7a10" containerID="b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8" exitCode=0 Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.818337 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" event={"ID":"3487b13a-0c53-417a-a73c-4709af1f7a10","Type":"ContainerDied","Data":"b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8"} Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.818627 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:29Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.866139 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:29Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.880243 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.880289 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.880301 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.880322 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.880335 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:29Z","lastTransitionTime":"2025-11-22T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.901533 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:29Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.936520 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:29Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.977021 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4
860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:29Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.983418 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.983455 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.983468 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:29 crc 
kubenswrapper[4928]: I1122 00:06:29.983488 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:29 crc kubenswrapper[4928]: I1122 00:06:29.983502 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:29Z","lastTransitionTime":"2025-11-22T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.026041 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:30Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.057082 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-22T00:06:30Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.085974 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.086020 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.086069 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.086091 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.086106 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:30Z","lastTransitionTime":"2025-11-22T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.098655 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:30Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.137090 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:30Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.181223 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:30Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.188302 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.188347 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.188361 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.188379 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.188392 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:30Z","lastTransitionTime":"2025-11-22T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.217873 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b2
8964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:30Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.263107 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:30Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.291604 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.291649 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.291659 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.291677 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.291690 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:30Z","lastTransitionTime":"2025-11-22T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.299055 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:30Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.345676 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:30Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.377239 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:30Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.393955 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.394006 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.394015 4928 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.394034 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.394045 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:30Z","lastTransitionTime":"2025-11-22T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.422013 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7f
dcf5b1eb32ff5465b6d9400b3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:30Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.456191 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:30Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.497174 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.497218 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.497232 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.497250 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.497260 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:30Z","lastTransitionTime":"2025-11-22T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.501047 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\
" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:30Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.537831 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:30Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.578242 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4
860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:30Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.600710 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.600765 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.600776 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:30 crc 
kubenswrapper[4928]: I1122 00:06:30.600810 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.600822 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:30Z","lastTransitionTime":"2025-11-22T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.622706 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:30Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.659786 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378c39793deccb986522ae643c2da3b423aa1711bc63648c3cbf01db65f95823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c
9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:30Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.701941 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:30Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.704223 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.704259 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.704274 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.704294 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.704306 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:30Z","lastTransitionTime":"2025-11-22T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:30 crc kubenswrapper[4928]: I1122 00:06:30.755843 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:30Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.206241 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.206288 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.206529 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.206552 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.206562 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:31Z","lastTransitionTime":"2025-11-22T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.208771 4928 generic.go:334] "Generic (PLEG): container finished" podID="3487b13a-0c53-417a-a73c-4709af1f7a10" containerID="3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f" exitCode=0 Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.208880 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" event={"ID":"3487b13a-0c53-417a-a73c-4709af1f7a10","Type":"ContainerDied","Data":"3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f"} Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.228832 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:31Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.250228 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:31Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.267308 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:31Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.282392 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4
860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:31Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.301179 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:31Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.309542 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:31 crc 
kubenswrapper[4928]: I1122 00:06:31.309592 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.309605 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.309625 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.309638 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:31Z","lastTransitionTime":"2025-11-22T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.312578 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378c39793deccb986522ae643c2da3b423aa1711bc63648c3cbf01db65f95823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:31Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.335103 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:31Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.347865 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:31Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.375420 4928 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:31Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.390873 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-22T00:06:31Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.403721 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:31Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.413322 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.413377 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.413388 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.413404 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.413421 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:31Z","lastTransitionTime":"2025-11-22T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.416961 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:31Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.431589 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:31Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.448034 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:31Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.517028 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.517069 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.517080 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.517097 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.517109 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:31Z","lastTransitionTime":"2025-11-22T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.617657 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.617693 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.617689 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:06:31 crc kubenswrapper[4928]: E1122 00:06:31.617850 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:06:31 crc kubenswrapper[4928]: E1122 00:06:31.618115 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:06:31 crc kubenswrapper[4928]: E1122 00:06:31.619402 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.619843 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.619878 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.619887 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.619901 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.619912 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:31Z","lastTransitionTime":"2025-11-22T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.723524 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.723575 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.723585 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.723607 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.723631 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:31Z","lastTransitionTime":"2025-11-22T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.827122 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.827190 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.827208 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.827314 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.827335 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:31Z","lastTransitionTime":"2025-11-22T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.930347 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.930429 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.930451 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.930483 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:31 crc kubenswrapper[4928]: I1122 00:06:31.930507 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:31Z","lastTransitionTime":"2025-11-22T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.033621 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.033657 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.033665 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.033681 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.033692 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:32Z","lastTransitionTime":"2025-11-22T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.136499 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.136542 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.136556 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.136573 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.136587 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:32Z","lastTransitionTime":"2025-11-22T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.216057 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" event={"ID":"31460a9d-62d5-4dae-bb57-4db45544352d","Type":"ContainerStarted","Data":"4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001"} Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.218662 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" event={"ID":"3487b13a-0c53-417a-a73c-4709af1f7a10","Type":"ContainerStarted","Data":"1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17"} Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.238416 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.238462 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.238471 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.238490 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.238500 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:32Z","lastTransitionTime":"2025-11-22T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.241397 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:32Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.252935 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:32Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.269831 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:32Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.284549 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:32Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.299332 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:32Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.311556 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:32Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.324446 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:32Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.339716 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:32Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.341678 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.341729 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.341739 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.341759 4928 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.341839 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:32Z","lastTransitionTime":"2025-11-22T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.360876 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"p
hase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:32Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.382200 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378c39793deccb986522ae643c2da3b423aa1711bc63648c3cbf01db65f95823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:32Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.396225 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:32Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.410220 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:32Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.421259 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:32Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.431327 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4
860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:32Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.445211 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.445244 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.445254 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:32 crc 
kubenswrapper[4928]: I1122 00:06:32.445271 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.445282 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:32Z","lastTransitionTime":"2025-11-22T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.549707 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.549774 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.549783 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.549832 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.549843 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:32Z","lastTransitionTime":"2025-11-22T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.652261 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.652316 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.652328 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.652348 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.652360 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:32Z","lastTransitionTime":"2025-11-22T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.755074 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.755130 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.755143 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.755166 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.755182 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:32Z","lastTransitionTime":"2025-11-22T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.857348 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.857424 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.857452 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.857485 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.857517 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:32Z","lastTransitionTime":"2025-11-22T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.960531 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.960594 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.960617 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.960640 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:32 crc kubenswrapper[4928]: I1122 00:06:32.960659 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:32Z","lastTransitionTime":"2025-11-22T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.062684 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.062750 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.062759 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.062776 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.062800 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:33Z","lastTransitionTime":"2025-11-22T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.165598 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.165698 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.165709 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.165729 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.165741 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:33Z","lastTransitionTime":"2025-11-22T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.221750 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.221935 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.221975 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:33 crc kubenswrapper[4928]: E1122 00:06:33.222114 4928 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 00:06:33 crc kubenswrapper[4928]: E1122 00:06:33.222113 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:06:41.222062077 +0000 UTC m=+36.510093308 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:06:33 crc kubenswrapper[4928]: E1122 00:06:33.222128 4928 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 00:06:33 crc kubenswrapper[4928]: E1122 00:06:33.222175 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 00:06:41.22215402 +0000 UTC m=+36.510185221 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 00:06:33 crc kubenswrapper[4928]: E1122 00:06:33.222377 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 00:06:41.222260373 +0000 UTC m=+36.510291664 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.225675 4928 generic.go:334] "Generic (PLEG): container finished" podID="3487b13a-0c53-417a-a73c-4709af1f7a10" containerID="1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17" exitCode=0 Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.225702 4928 generic.go:334] "Generic (PLEG): container finished" podID="3487b13a-0c53-417a-a73c-4709af1f7a10" containerID="43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726" exitCode=0 Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.225724 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" event={"ID":"3487b13a-0c53-417a-a73c-4709af1f7a10","Type":"ContainerDied","Data":"1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17"} Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.225758 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" event={"ID":"3487b13a-0c53-417a-a73c-4709af1f7a10","Type":"ContainerDied","Data":"43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726"} Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.239844 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4
860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:33Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.256483 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:33Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.268258 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:33 crc 
kubenswrapper[4928]: I1122 00:06:33.268330 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.268350 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.268378 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.268398 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:33Z","lastTransitionTime":"2025-11-22T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.274185 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378c39793deccb986522ae643c2da3b423aa1711bc63648c3cbf01db65f95823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:33Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.294656 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:33Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.312008 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:33Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.323081 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.323206 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:06:33 crc kubenswrapper[4928]: E1122 00:06:33.323481 4928 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 00:06:33 crc kubenswrapper[4928]: E1122 00:06:33.323507 4928 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 00:06:33 crc kubenswrapper[4928]: E1122 00:06:33.323523 4928 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:06:33 crc kubenswrapper[4928]: E1122 00:06:33.323568 4928 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 00:06:41.323551923 +0000 UTC m=+36.611583114 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:06:33 crc kubenswrapper[4928]: E1122 00:06:33.323486 4928 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 00:06:33 crc kubenswrapper[4928]: E1122 00:06:33.324034 4928 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 00:06:33 crc kubenswrapper[4928]: E1122 00:06:33.324091 4928 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:06:33 crc kubenswrapper[4928]: E1122 00:06:33.324258 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 00:06:41.324231083 +0000 UTC m=+36.612262284 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.325989 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPat
h\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:33Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.348007 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:33Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.362237 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:33Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.372986 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.373184 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.373213 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.373244 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.373283 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:33Z","lastTransitionTime":"2025-11-22T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.375305 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:33Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.387265 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:33Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.400736 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:33Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.413089 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:33Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.427108 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:33Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.442467 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:33Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.476235 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.476297 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.476315 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.476338 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.476355 4928 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:33Z","lastTransitionTime":"2025-11-22T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.580255 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.580365 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.580398 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.580433 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.580473 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:33Z","lastTransitionTime":"2025-11-22T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.618468 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.618530 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.618682 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:06:33 crc kubenswrapper[4928]: E1122 00:06:33.618823 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:06:33 crc kubenswrapper[4928]: E1122 00:06:33.618951 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:06:33 crc kubenswrapper[4928]: E1122 00:06:33.619137 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.683545 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.683601 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.683612 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.683632 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.683643 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:33Z","lastTransitionTime":"2025-11-22T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.787512 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.787570 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.787582 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.787603 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.787617 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:33Z","lastTransitionTime":"2025-11-22T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.891142 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.891228 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.891246 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.891269 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.891307 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:33Z","lastTransitionTime":"2025-11-22T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.995176 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.995251 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.995288 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.995315 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:33 crc kubenswrapper[4928]: I1122 00:06:33.995332 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:33Z","lastTransitionTime":"2025-11-22T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.099119 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.099160 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.099175 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.099192 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.099205 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:34Z","lastTransitionTime":"2025-11-22T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.202343 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.202399 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.202414 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.202435 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.202450 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:34Z","lastTransitionTime":"2025-11-22T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.236374 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" event={"ID":"31460a9d-62d5-4dae-bb57-4db45544352d","Type":"ContainerStarted","Data":"82cdd7a658ab7af671ad8ac93fe4cd4fe3a68d96a3dfc000f6b44ab134a0337f"} Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.237067 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.237126 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.245718 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" event={"ID":"3487b13a-0c53-417a-a73c-4709af1f7a10","Type":"ContainerStarted","Data":"5d55ec606decfa92e80435c75e4334f3af401ab6e2a3a85f755bc5ab60fbf97d"} Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.255631 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:34Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.270751 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:34Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.278583 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.280529 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.287363 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:34Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.298820 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:34Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.305903 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.306299 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:34 crc 
kubenswrapper[4928]: I1122 00:06:34.306393 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.306499 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.306706 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:34Z","lastTransitionTime":"2025-11-22T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.321382 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:34Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.341146 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:34Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.357518 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4
860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:34Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.372298 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:34Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.384640 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378c39793deccb986522ae643c2da3b423aa1711bc63648c3cbf01db65f95823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:34Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.409499 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.409560 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.409573 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.409591 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.409601 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:34Z","lastTransitionTime":"2025-11-22T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.413922 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82cdd7a658ab7af671ad8ac93fe4cd4fe3a68d96a3dfc000f6b44ab134a0337f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:34Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.431669 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:34Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.448389 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:34Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.464037 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-22T00:06:34Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.480199 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:34Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.496949 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:34Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.511185 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4
860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:34Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.512823 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.512872 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.512884 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:34 crc 
kubenswrapper[4928]: I1122 00:06:34.512902 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.512916 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:34Z","lastTransitionTime":"2025-11-22T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.527749 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:34Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.543507 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378c39793deccb986522ae643c2da3b423aa1711bc63648c3cbf01db65f95823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c
9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:34Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.557351 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:34Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.574287 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:34Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.598832 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82cdd7a658ab7af671ad8ac93fe4cd4fe3a68d96a3dfc000f6b44ab134a0337f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:34Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.615695 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.615747 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.615760 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.615783 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.615818 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:34Z","lastTransitionTime":"2025-11-22T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.620409 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:34Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.637648 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:34Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.658838 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:34Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.675191 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:34Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.694016 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d55ec606decfa92e80435c75e4334f3af401ab6e2a3a85f755bc5ab60fbf97d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bdf9
d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:34Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.705774 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:34Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.718665 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:34Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.719261 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.719313 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.719327 4928 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.719350 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.719364 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:34Z","lastTransitionTime":"2025-11-22T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.822011 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.822110 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.822177 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.822218 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.822245 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:34Z","lastTransitionTime":"2025-11-22T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.924970 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.925046 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.925069 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.925101 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:34 crc kubenswrapper[4928]: I1122 00:06:34.925127 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:34Z","lastTransitionTime":"2025-11-22T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.028208 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.028262 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.028275 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.028296 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.028311 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:35Z","lastTransitionTime":"2025-11-22T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.131018 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.131268 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.131277 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.131292 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.131301 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:35Z","lastTransitionTime":"2025-11-22T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.234277 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.234332 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.234348 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.234365 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.234376 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:35Z","lastTransitionTime":"2025-11-22T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.249155 4928 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.339016 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.339073 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.339092 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.339122 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.339140 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:35Z","lastTransitionTime":"2025-11-22T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.441552 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.441638 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.441666 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.441705 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.441731 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:35Z","lastTransitionTime":"2025-11-22T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.544461 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.544505 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.544518 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.544537 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.544550 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:35Z","lastTransitionTime":"2025-11-22T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.562917 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.562969 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.562983 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.563002 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.563015 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:35Z","lastTransitionTime":"2025-11-22T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:35 crc kubenswrapper[4928]: E1122 00:06:35.573631 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.577632 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.577672 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.577683 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.577701 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.577711 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:35Z","lastTransitionTime":"2025-11-22T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:35 crc kubenswrapper[4928]: E1122 00:06:35.595110 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.598678 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.598707 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.598715 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.598731 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.598741 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:35Z","lastTransitionTime":"2025-11-22T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:35 crc kubenswrapper[4928]: E1122 00:06:35.609647 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.614822 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.614887 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.614908 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.614934 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.614951 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:35Z","lastTransitionTime":"2025-11-22T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.617537 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.617581 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.617590 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:06:35 crc kubenswrapper[4928]: E1122 00:06:35.617657 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:06:35 crc kubenswrapper[4928]: E1122 00:06:35.617749 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:06:35 crc kubenswrapper[4928]: E1122 00:06:35.617867 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.630295 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:35 crc kubenswrapper[4928]: E1122 00:06:35.632747 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.643151 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.643251 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.643266 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.643286 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.643301 4928 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:35Z","lastTransitionTime":"2025-11-22T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.644068 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:35 crc kubenswrapper[4928]: E1122 00:06:35.657101 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:35 crc kubenswrapper[4928]: E1122 00:06:35.657229 4928 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.661471 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.661499 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.661508 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.661521 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.661531 4928 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:35Z","lastTransitionTime":"2025-11-22T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.663930 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.673612 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4
860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.684533 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.694147 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378c39793deccb986522ae643c2da3b423aa1711bc63648c3cbf01db65f95823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.710056 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82cdd7a658ab7af671ad8ac93fe4cd4fe3a68d96a3dfc000f6b44ab134a0337f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.720584 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.735178 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.746925 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-22T00:06:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.758668 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.763618 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.763664 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.763676 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.763694 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.763708 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:35Z","lastTransitionTime":"2025-11-22T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.768109 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.778552 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.790156 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d55ec606decfa92e80435c75e4334f3af401ab6e2a3a85f755bc5ab60fbf97d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bdf9
d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.870036 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.870085 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.870103 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.870127 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.870144 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:35Z","lastTransitionTime":"2025-11-22T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.972787 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.972924 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.972937 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.972960 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:35 crc kubenswrapper[4928]: I1122 00:06:35.972975 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:35Z","lastTransitionTime":"2025-11-22T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.076368 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.076432 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.076454 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.076481 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.076501 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:36Z","lastTransitionTime":"2025-11-22T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.179147 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.179196 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.179205 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.179223 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.179236 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:36Z","lastTransitionTime":"2025-11-22T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.251763 4928 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.281268 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.281325 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.281341 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.281361 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.281375 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:36Z","lastTransitionTime":"2025-11-22T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.384003 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.384056 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.384072 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.384092 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.384104 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:36Z","lastTransitionTime":"2025-11-22T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.486311 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.486398 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.486413 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.486436 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.486454 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:36Z","lastTransitionTime":"2025-11-22T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.589772 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.589836 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.589848 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.589876 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.589888 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:36Z","lastTransitionTime":"2025-11-22T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.693381 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.693475 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.693504 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.693544 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.693573 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:36Z","lastTransitionTime":"2025-11-22T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.797215 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.797271 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.797283 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.797303 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.797313 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:36Z","lastTransitionTime":"2025-11-22T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.901005 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.901100 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.901113 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.901156 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:36 crc kubenswrapper[4928]: I1122 00:06:36.901171 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:36Z","lastTransitionTime":"2025-11-22T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.004835 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.004876 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.004886 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.004900 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.004910 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:37Z","lastTransitionTime":"2025-11-22T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.107970 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.108048 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.108075 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.108107 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.108132 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:37Z","lastTransitionTime":"2025-11-22T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.211448 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.211522 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.211559 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.211592 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.211614 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:37Z","lastTransitionTime":"2025-11-22T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.257740 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nchmx_31460a9d-62d5-4dae-bb57-4db45544352d/ovnkube-controller/0.log" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.263665 4928 generic.go:334] "Generic (PLEG): container finished" podID="31460a9d-62d5-4dae-bb57-4db45544352d" containerID="82cdd7a658ab7af671ad8ac93fe4cd4fe3a68d96a3dfc000f6b44ab134a0337f" exitCode=1 Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.263752 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" event={"ID":"31460a9d-62d5-4dae-bb57-4db45544352d","Type":"ContainerDied","Data":"82cdd7a658ab7af671ad8ac93fe4cd4fe3a68d96a3dfc000f6b44ab134a0337f"} Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.264924 4928 scope.go:117] "RemoveContainer" containerID="82cdd7a658ab7af671ad8ac93fe4cd4fe3a68d96a3dfc000f6b44ab134a0337f" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.309903 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82cdd7a658ab7af671ad8ac93fe4cd4fe3a68d96a3dfc000f6b44ab134a0337f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cdd7a658ab7af671ad8ac93fe4cd4fe3a68d96a3dfc000f6b44ab134a0337f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\
\\"2025-11-22T00:06:36Z\\\",\\\"message\\\":\\\"ft/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1122 00:06:36.809055 6235 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809316 6235 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809580 6235 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809600 6235 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809628 6235 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809634 6235 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1122 00:06:36.809662 6235 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1122 00:06:36.809713 6235 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 00:06:36.809727 6235 factory.go:656] Stopping watch factory\\\\nI1122 00:06:36.809739 6235 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1122 00:06:36.809745 6235 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 00:06:36.809751 6235 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:37Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.316302 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.316392 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.316423 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.316459 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.316485 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:37Z","lastTransitionTime":"2025-11-22T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.329771 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:37Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.352271 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:37Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.373099 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:37Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.398564 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:37Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.416340 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:37Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.420338 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.420373 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.420385 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.420402 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.420413 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:37Z","lastTransitionTime":"2025-11-22T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.438323 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:37Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.458034 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d55ec606decfa92e80435c75e4334f3af401ab6e2a3a85f755bc5ab60fbf97d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bdf9
d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:37Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.476158 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:37Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:37 crc kubenswrapper[4928]: 
I1122 00:06:37.492401 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus
/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:37Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 
00:06:37.504678 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378c39793deccb986522ae643c2da3b423aa1711bc63648c3cbf01db65f95823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:37Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.522849 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.522912 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.522931 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.522962 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.522982 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:37Z","lastTransitionTime":"2025-11-22T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.523093 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:37Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.538409 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:37Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.552157 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:37Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.617720 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.617757 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.617725 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:06:37 crc kubenswrapper[4928]: E1122 00:06:37.617872 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:06:37 crc kubenswrapper[4928]: E1122 00:06:37.618031 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:06:37 crc kubenswrapper[4928]: E1122 00:06:37.618094 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.625171 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.625252 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.625267 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.625284 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.625295 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:37Z","lastTransitionTime":"2025-11-22T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.727885 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.727929 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.727941 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.727958 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.727969 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:37Z","lastTransitionTime":"2025-11-22T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.830865 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.830913 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.830927 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.830948 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.830961 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:37Z","lastTransitionTime":"2025-11-22T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.933775 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.933834 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.933846 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.933865 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:37 crc kubenswrapper[4928]: I1122 00:06:37.933879 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:37Z","lastTransitionTime":"2025-11-22T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.036665 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.036704 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.036720 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.036737 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.036746 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:38Z","lastTransitionTime":"2025-11-22T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.140015 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.140078 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.140095 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.140119 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.140138 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:38Z","lastTransitionTime":"2025-11-22T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.242250 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.242304 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.242318 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.242340 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.242354 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:38Z","lastTransitionTime":"2025-11-22T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.268873 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nchmx_31460a9d-62d5-4dae-bb57-4db45544352d/ovnkube-controller/0.log" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.271210 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" event={"ID":"31460a9d-62d5-4dae-bb57-4db45544352d","Type":"ContainerStarted","Data":"f82defc75c77651960186b4f8462896b6a36fa37d0b0f1ac71562054e7edf4e2"} Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.271386 4928 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.281170 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:38Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.292248 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:38Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.310951 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d55ec606decfa92e80435c75e4334f3af401ab6e2a3a85f755bc5ab60fbf97d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bdf9
d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:38Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.328737 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:38Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:38 crc kubenswrapper[4928]: 
I1122 00:06:38.344194 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.344227 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.344236 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.344251 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.344262 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:38Z","lastTransitionTime":"2025-11-22T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.349712 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:38Z 
is after 2025-08-24T17:21:41Z" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.364179 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378c39793deccb986522ae643c2da3b423aa1711bc63648c3cbf01db65f95823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:38Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.376762 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:38Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.389453 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:38Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.401215 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:38Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.419755 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82defc75c77651960186b4f8462896b6a36fa37d0b0f1ac71562054e7edf4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cdd7a658ab7af671ad8ac93fe4cd4fe3a68d96a3dfc000f6b44ab134a0337f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:06:36Z\\\",\\\"message\\\":\\\"ft/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1122 00:06:36.809055 6235 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809316 6235 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809580 6235 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809600 6235 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809628 6235 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809634 6235 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1122 00:06:36.809662 6235 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1122 00:06:36.809713 6235 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 00:06:36.809727 6235 factory.go:656] Stopping watch factory\\\\nI1122 00:06:36.809739 6235 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1122 00:06:36.809745 6235 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 00:06:36.809751 6235 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:38Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.434349 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:38Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.445336 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:38Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.446815 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.446842 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.446851 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.446864 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.446872 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:38Z","lastTransitionTime":"2025-11-22T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.458237 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:38Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.470858 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:38Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.549253 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.549300 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.549309 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.549327 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.549342 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:38Z","lastTransitionTime":"2025-11-22T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.651690 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.651742 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.651756 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.651778 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.651814 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:38Z","lastTransitionTime":"2025-11-22T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.754939 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.754984 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.754993 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.755010 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.755019 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:38Z","lastTransitionTime":"2025-11-22T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.858947 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.859019 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.859039 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.859069 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.859092 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:38Z","lastTransitionTime":"2025-11-22T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.962677 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.962776 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.962827 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.962860 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:38 crc kubenswrapper[4928]: I1122 00:06:38.962891 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:38Z","lastTransitionTime":"2025-11-22T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.065232 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.065320 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.065342 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.065375 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.065395 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:39Z","lastTransitionTime":"2025-11-22T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.168419 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.168468 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.168483 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.168502 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.168526 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:39Z","lastTransitionTime":"2025-11-22T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.229322 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb"] Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.230042 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.233104 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.233287 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.251245 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:39Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.270391 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:39Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.271535 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.271577 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.271594 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.271623 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.271643 4928 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:39Z","lastTransitionTime":"2025-11-22T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.278345 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nchmx_31460a9d-62d5-4dae-bb57-4db45544352d/ovnkube-controller/1.log" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.280637 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nchmx_31460a9d-62d5-4dae-bb57-4db45544352d/ovnkube-controller/0.log" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.285238 4928 generic.go:334] "Generic (PLEG): container finished" podID="31460a9d-62d5-4dae-bb57-4db45544352d" containerID="f82defc75c77651960186b4f8462896b6a36fa37d0b0f1ac71562054e7edf4e2" exitCode=1 Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.285298 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" event={"ID":"31460a9d-62d5-4dae-bb57-4db45544352d","Type":"ContainerDied","Data":"f82defc75c77651960186b4f8462896b6a36fa37d0b0f1ac71562054e7edf4e2"} Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.285370 4928 scope.go:117] "RemoveContainer" containerID="82cdd7a658ab7af671ad8ac93fe4cd4fe3a68d96a3dfc000f6b44ab134a0337f" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.286711 4928 scope.go:117] "RemoveContainer" containerID="f82defc75c77651960186b4f8462896b6a36fa37d0b0f1ac71562054e7edf4e2" Nov 22 00:06:39 crc kubenswrapper[4928]: E1122 00:06:39.287170 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 
10s restarting failed container=ovnkube-controller pod=ovnkube-node-nchmx_openshift-ovn-kubernetes(31460a9d-62d5-4dae-bb57-4db45544352d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.296838 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dbe5fdda-8d76-4a8d-8a8d-d25be738257a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9g8jb\" (UID: \"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.297384 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp4s7\" (UniqueName: \"kubernetes.io/projected/dbe5fdda-8d76-4a8d-8a8d-d25be738257a-kube-api-access-wp4s7\") pod \"ovnkube-control-plane-749d76644c-9g8jb\" (UID: \"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.297424 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dbe5fdda-8d76-4a8d-8a8d-d25be738257a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9g8jb\" (UID: \"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.297482 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dbe5fdda-8d76-4a8d-8a8d-d25be738257a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9g8jb\" (UID: \"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.300715 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:39Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.320519 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:39Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.338630 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:39Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.358686 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:39Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.375085 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.375136 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.375148 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.375174 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.375190 4928 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:39Z","lastTransitionTime":"2025-11-22T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.378259 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d55ec606decfa92e80435c75e4334f3af401ab6e2a3a85f755bc5ab60fbf97d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:39Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.396022 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9g8jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:39Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.398818 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dbe5fdda-8d76-4a8d-8a8d-d25be738257a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9g8jb\" (UID: \"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.398897 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dbe5fdda-8d76-4a8d-8a8d-d25be738257a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9g8jb\" (UID: \"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.398930 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp4s7\" (UniqueName: \"kubernetes.io/projected/dbe5fdda-8d76-4a8d-8a8d-d25be738257a-kube-api-access-wp4s7\") pod \"ovnkube-control-plane-749d76644c-9g8jb\" (UID: \"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.399005 4928 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dbe5fdda-8d76-4a8d-8a8d-d25be738257a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9g8jb\" (UID: \"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.400112 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dbe5fdda-8d76-4a8d-8a8d-d25be738257a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9g8jb\" (UID: \"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.400544 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dbe5fdda-8d76-4a8d-8a8d-d25be738257a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9g8jb\" (UID: \"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.408899 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dbe5fdda-8d76-4a8d-8a8d-d25be738257a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9g8jb\" (UID: \"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.416994 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:39Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.419564 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp4s7\" (UniqueName: \"kubernetes.io/projected/dbe5fdda-8d76-4a8d-8a8d-d25be738257a-kube-api-access-wp4s7\") pod \"ovnkube-control-plane-749d76644c-9g8jb\" (UID: \"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.439054 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:39Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.460183 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:39Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.477853 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.477911 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.477931 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.477958 4928 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.477977 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:39Z","lastTransitionTime":"2025-11-22T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.480245 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:39Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.501638 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00
:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:39Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.517433 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378c39793deccb986522ae643c2da3b423aa1711bc63648c3cbf01db65f95823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-
22T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:39Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.544274 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82defc75c77651960186b4f8462896b6a36fa37d0b0f1ac71562054e7edf4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cdd7a658ab7af671ad8ac93fe4cd4fe3a68d96a3dfc000f6b44ab134a0337f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:06:36Z\\\",\\\"message\\\":\\\"ft/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f
actory.go:140\\\\nI1122 00:06:36.809055 6235 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809316 6235 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809580 6235 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809600 6235 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809628 6235 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809634 6235 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1122 00:06:36.809662 6235 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1122 00:06:36.809713 6235 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 00:06:36.809727 6235 factory.go:656] Stopping watch factory\\\\nI1122 00:06:36.809739 6235 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1122 00:06:36.809745 6235 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 00:06:36.809751 6235 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:39Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.545471 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.566032 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"ce
rt-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshi
ft-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:39Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:39 crc kubenswrapper[4928]: W1122 00:06:39.567183 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbe5fdda_8d76_4a8d_8a8d_d25be738257a.slice/crio-a0eead44c5b880acafd049d1cf8ab906375c1fc97875bc5ea81b244a5378b496 WatchSource:0}: Error finding container a0eead44c5b880acafd049d1cf8ab906375c1fc97875bc5ea81b244a5378b496: Status 404 returned error can't find the container with id a0eead44c5b880acafd049d1cf8ab906375c1fc97875bc5ea81b244a5378b496 Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.580561 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 
00:06:39.580593 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.580604 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.580621 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.580634 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:39Z","lastTransitionTime":"2025-11-22T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.590760 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:39Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.607446 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:39Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.618011 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:06:39 crc kubenswrapper[4928]: E1122 00:06:39.618179 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.618647 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:06:39 crc kubenswrapper[4928]: E1122 00:06:39.618744 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.618848 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:39 crc kubenswrapper[4928]: E1122 00:06:39.618896 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.628918 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:39Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.643779 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:39Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.661926 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:39Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.681844 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d55ec606decfa92e80435c75e4334f3af401ab6e2a3a85f755bc5ab60fbf97d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bdf9
d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:39Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.683990 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.684024 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.684036 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.684077 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.684088 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:39Z","lastTransitionTime":"2025-11-22T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.699925 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9g8jb\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:39Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.715371 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:39Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.734209 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:39Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.757322 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:39Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.775467 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4
860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:39Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.787196 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.787261 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.787385 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:39 crc 
kubenswrapper[4928]: I1122 00:06:39.787424 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.787447 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:39Z","lastTransitionTime":"2025-11-22T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.798369 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:39Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.815629 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378c39793deccb986522ae643c2da3b423aa1711bc63648c3cbf01db65f95823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c
9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:39Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.854515 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82defc75c77651960186b4f8462896b6a36fa37d0b0f1ac71562054e7edf4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cdd7a658ab7af671ad8ac93fe4cd4fe3a68d96a3dfc000f6b44ab134a0337f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:06:36Z\\\",\\\"message\\\":\\\"ft/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1122 00:06:36.809055 6235 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809316 6235 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809580 6235 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809600 6235 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809628 6235 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809634 6235 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1122 00:06:36.809662 6235 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1122 00:06:36.809713 6235 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 00:06:36.809727 6235 factory.go:656] Stopping watch factory\\\\nI1122 00:06:36.809739 6235 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1122 00:06:36.809745 6235 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 00:06:36.809751 6235 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82defc75c77651960186b4f8462896b6a36fa37d0b0f1ac71562054e7edf4e2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:06:38Z\\\",\\\"message\\\":\\\"ization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-22T00:06:38Z is after 2025-08-24T17:21:41Z]\\\\nI1122 00:06:38.633885 6375 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-hp7cp\\\\nI1122 00:06:38.633889 6375 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-nchmx in node crc\\\\nI1122 00:06:38.633895 6375 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-hp7cp\\\\nI1122 00:06:38.633899 6375 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-kx2zm\\\\nI1122 00:06:38.633901 6375 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-nchmx after 0 failed attempt(s)\\\\nI1122 00:06:38.633908 6375 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-kx2zm\\\\nI1122 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"hos
t-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:39Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.889972 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.890011 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.890021 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.890039 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.890049 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:39Z","lastTransitionTime":"2025-11-22T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.993145 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.993193 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.993203 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.993219 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:39 crc kubenswrapper[4928]: I1122 00:06:39.993228 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:39Z","lastTransitionTime":"2025-11-22T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.096210 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.096295 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.096315 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.096346 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.096376 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:40Z","lastTransitionTime":"2025-11-22T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.199299 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.199366 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.199389 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.199417 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.199438 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:40Z","lastTransitionTime":"2025-11-22T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.297579 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" event={"ID":"dbe5fdda-8d76-4a8d-8a8d-d25be738257a","Type":"ContainerStarted","Data":"ef08c6da00a3a64de0c044927dfc4cc482bc16403a3b914e457410e85f127cf6"} Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.297654 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" event={"ID":"dbe5fdda-8d76-4a8d-8a8d-d25be738257a","Type":"ContainerStarted","Data":"87e892ca5e0665be6285dce09c13baff951eb1b769532c1b139517e1c0e91b0b"} Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.297675 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" event={"ID":"dbe5fdda-8d76-4a8d-8a8d-d25be738257a","Type":"ContainerStarted","Data":"a0eead44c5b880acafd049d1cf8ab906375c1fc97875bc5ea81b244a5378b496"} Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.302510 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.302572 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.302592 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.302617 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.302642 4928 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:40Z","lastTransitionTime":"2025-11-22T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.302752 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nchmx_31460a9d-62d5-4dae-bb57-4db45544352d/ovnkube-controller/1.log" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.325469 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:40Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.351392 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:40Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.351718 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-kx6zl"] Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.352342 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:06:40 crc kubenswrapper[4928]: E1122 00:06:40.352422 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.376720 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:40Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.392677 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:40Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.405163 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.405230 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.405254 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.405285 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.405305 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:40Z","lastTransitionTime":"2025-11-22T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.411734 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\"
:\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:40Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.412285 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3aa30f0-2a7f-4b60-b254-3e894a581f89-metrics-certs\") pod \"network-metrics-daemon-kx6zl\" (UID: \"b3aa30f0-2a7f-4b60-b254-3e894a581f89\") " pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.412376 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq7wl\" (UniqueName: \"kubernetes.io/projected/b3aa30f0-2a7f-4b60-b254-3e894a581f89-kube-api-access-pq7wl\") pod \"network-metrics-daemon-kx6zl\" (UID: \"b3aa30f0-2a7f-4b60-b254-3e894a581f89\") " pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.431765 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:40Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.457593 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d55ec606decfa92e80435c75e4334f3af401ab6e2a3a85f755bc5ab60fbf97d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bdf9
d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:40Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.473907 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e892ca5e0665be6285dce09c13baff951eb1b769532c1b139517e1c0e91b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef08c6da00a3a64de0c044927dfc4cc482bc16403a3b914e457410e85f127cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9g8jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-22T00:06:40Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.492617 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378c39793deccb986522ae643c2da3b423aa1711bc63648c3cbf01db65f95823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:40Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.508915 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.508973 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.508988 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.509007 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.509021 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:40Z","lastTransitionTime":"2025-11-22T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.513602 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3aa30f0-2a7f-4b60-b254-3e894a581f89-metrics-certs\") pod \"network-metrics-daemon-kx6zl\" (UID: \"b3aa30f0-2a7f-4b60-b254-3e894a581f89\") " pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.513780 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq7wl\" (UniqueName: \"kubernetes.io/projected/b3aa30f0-2a7f-4b60-b254-3e894a581f89-kube-api-access-pq7wl\") pod \"network-metrics-daemon-kx6zl\" (UID: \"b3aa30f0-2a7f-4b60-b254-3e894a581f89\") " pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:06:40 crc kubenswrapper[4928]: E1122 00:06:40.513829 4928 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 00:06:40 crc kubenswrapper[4928]: E1122 00:06:40.514047 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3aa30f0-2a7f-4b60-b254-3e894a581f89-metrics-certs podName:b3aa30f0-2a7f-4b60-b254-3e894a581f89 nodeName:}" failed. No retries permitted until 2025-11-22 00:06:41.01402631 +0000 UTC m=+36.302057501 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3aa30f0-2a7f-4b60-b254-3e894a581f89-metrics-certs") pod "network-metrics-daemon-kx6zl" (UID: "b3aa30f0-2a7f-4b60-b254-3e894a581f89") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.514681 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:40Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.530070 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:40Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.544604 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq7wl\" (UniqueName: \"kubernetes.io/projected/b3aa30f0-2a7f-4b60-b254-3e894a581f89-kube-api-access-pq7wl\") pod \"network-metrics-daemon-kx6zl\" (UID: \"b3aa30f0-2a7f-4b60-b254-3e894a581f89\") " pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.550712 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:40Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.567315 4928 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:40Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.592426 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:40Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.612130 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:40 crc 
kubenswrapper[4928]: I1122 00:06:40.612174 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.612186 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.612205 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.612219 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:40Z","lastTransitionTime":"2025-11-22T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.620286 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82defc75c77651960186b4f8462896b6a36fa37d0b0f1ac71562054e7edf4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cdd7a658ab7af671ad8ac93fe4cd4fe3a68d96a3dfc000f6b44ab134a0337f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:06:36Z\\\",\\\"message\\\":\\\"ft/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f
actory.go:140\\\\nI1122 00:06:36.809055 6235 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809316 6235 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809580 6235 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809600 6235 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809628 6235 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809634 6235 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1122 00:06:36.809662 6235 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1122 00:06:36.809713 6235 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 00:06:36.809727 6235 factory.go:656] Stopping watch factory\\\\nI1122 00:06:36.809739 6235 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1122 00:06:36.809745 6235 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 00:06:36.809751 6235 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82defc75c77651960186b4f8462896b6a36fa37d0b0f1ac71562054e7edf4e2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:06:38Z\\\",\\\"message\\\":\\\"ization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:38Z is after 2025-08-24T17:21:41Z]\\\\nI1122 00:06:38.633885 6375 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-hp7cp\\\\nI1122 00:06:38.633889 6375 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-nchmx in node crc\\\\nI1122 00:06:38.633895 6375 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-hp7cp\\\\nI1122 00:06:38.633899 6375 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-kx2zm\\\\nI1122 00:06:38.633901 6375 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-nchmx after 0 failed attempt(s)\\\\nI1122 00:06:38.633908 6375 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-kx2zm\\\\nI1122 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:40Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.640492 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:40Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.661518 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:40Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.682861 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:40Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.699691 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:40Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.715009 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:40Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.715640 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.715686 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.715700 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.715718 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.715730 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:40Z","lastTransitionTime":"2025-11-22T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.735642 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:40Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.755151 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d55ec606decfa92e80435c75e4334f3af401ab6e2a3a85f755bc5ab60fbf97d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bdf9
d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:40Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.768019 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e892ca5e0665be6285dce09c13baff951eb1b769532c1b139517e1c0e91b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef08c6da00a3a64de0c044927dfc4cc482bc16403a3b914e457410e85f127cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9g8jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-22T00:06:40Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.779750 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378c39793deccb986522ae643c2da3b423aa1711bc63648c3cbf01db65f95823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:40Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.795203 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:40Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.811081 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:40Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.819053 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.819134 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.819149 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.819174 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.819190 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:40Z","lastTransitionTime":"2025-11-22T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.828923 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:40Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.845364 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4
860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:40Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.864677 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:40Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.887692 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82defc75c77651960186b4f8462896b6a36fa37d0b0f1ac71562054e7edf4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cdd7a658ab7af671ad8ac93fe4cd4fe3a68d96a3dfc000f6b44ab134a0337f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:06:36Z\\\",\\\"message\\\":\\\"ft/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1122 00:06:36.809055 6235 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809316 6235 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809580 6235 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809600 6235 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809628 6235 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809634 6235 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1122 00:06:36.809662 6235 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1122 00:06:36.809713 6235 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 00:06:36.809727 6235 factory.go:656] Stopping watch factory\\\\nI1122 00:06:36.809739 6235 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1122 00:06:36.809745 6235 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 00:06:36.809751 6235 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82defc75c77651960186b4f8462896b6a36fa37d0b0f1ac71562054e7edf4e2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:06:38Z\\\",\\\"message\\\":\\\"ization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-22T00:06:38Z is after 2025-08-24T17:21:41Z]\\\\nI1122 00:06:38.633885 6375 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-hp7cp\\\\nI1122 00:06:38.633889 6375 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-nchmx in node crc\\\\nI1122 00:06:38.633895 6375 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-hp7cp\\\\nI1122 00:06:38.633899 6375 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-kx2zm\\\\nI1122 00:06:38.633901 6375 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-nchmx after 0 failed attempt(s)\\\\nI1122 00:06:38.633908 6375 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-kx2zm\\\\nI1122 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"hos
t-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:40Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.902059 4928 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-kx6zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3aa30f0-2a7f-4b60-b254-3e894a581f89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kx6zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:40Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:40 crc 
kubenswrapper[4928]: I1122 00:06:40.922417 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.922462 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.922477 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.922498 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:40 crc kubenswrapper[4928]: I1122 00:06:40.922511 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:40Z","lastTransitionTime":"2025-11-22T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.019505 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3aa30f0-2a7f-4b60-b254-3e894a581f89-metrics-certs\") pod \"network-metrics-daemon-kx6zl\" (UID: \"b3aa30f0-2a7f-4b60-b254-3e894a581f89\") " pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:06:41 crc kubenswrapper[4928]: E1122 00:06:41.019684 4928 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 00:06:41 crc kubenswrapper[4928]: E1122 00:06:41.019821 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3aa30f0-2a7f-4b60-b254-3e894a581f89-metrics-certs podName:b3aa30f0-2a7f-4b60-b254-3e894a581f89 nodeName:}" failed. No retries permitted until 2025-11-22 00:06:42.019772023 +0000 UTC m=+37.307803204 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3aa30f0-2a7f-4b60-b254-3e894a581f89-metrics-certs") pod "network-metrics-daemon-kx6zl" (UID: "b3aa30f0-2a7f-4b60-b254-3e894a581f89") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.024756 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.024808 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.024822 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.024843 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.024855 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:41Z","lastTransitionTime":"2025-11-22T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.127632 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.127676 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.127687 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.127704 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.127716 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:41Z","lastTransitionTime":"2025-11-22T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.222124 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:06:41 crc kubenswrapper[4928]: E1122 00:06:41.222255 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-22 00:06:57.222229561 +0000 UTC m=+52.510260762 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.222559 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.222599 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:41 crc kubenswrapper[4928]: E1122 00:06:41.222670 4928 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 00:06:41 crc kubenswrapper[4928]: E1122 00:06:41.222754 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-22 00:06:57.222742194 +0000 UTC m=+52.510773395 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 00:06:41 crc kubenswrapper[4928]: E1122 00:06:41.222760 4928 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 00:06:41 crc kubenswrapper[4928]: E1122 00:06:41.222826 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 00:06:57.222813176 +0000 UTC m=+52.510844367 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.231144 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.231213 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.231230 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.231253 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.231267 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:41Z","lastTransitionTime":"2025-11-22T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.323655 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:06:41 crc kubenswrapper[4928]: E1122 00:06:41.324027 4928 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 00:06:41 crc kubenswrapper[4928]: E1122 00:06:41.324096 4928 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 00:06:41 crc kubenswrapper[4928]: E1122 00:06:41.324122 4928 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:06:41 crc kubenswrapper[4928]: E1122 00:06:41.324258 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 00:06:57.32422049 +0000 UTC m=+52.612251861 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.334751 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.334868 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.334888 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.334916 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.334946 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:41Z","lastTransitionTime":"2025-11-22T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.424713 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:06:41 crc kubenswrapper[4928]: E1122 00:06:41.425020 4928 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 00:06:41 crc kubenswrapper[4928]: E1122 00:06:41.425067 4928 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 00:06:41 crc kubenswrapper[4928]: E1122 00:06:41.425082 4928 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:06:41 crc kubenswrapper[4928]: E1122 00:06:41.425158 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 00:06:57.42513254 +0000 UTC m=+52.713163731 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.438213 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.438262 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.438274 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.438293 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.438304 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:41Z","lastTransitionTime":"2025-11-22T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.542418 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.542489 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.542514 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.542545 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.542566 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:41Z","lastTransitionTime":"2025-11-22T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.618334 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.618422 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.618361 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.618533 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:41 crc kubenswrapper[4928]: E1122 00:06:41.618674 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:06:41 crc kubenswrapper[4928]: E1122 00:06:41.618861 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:06:41 crc kubenswrapper[4928]: E1122 00:06:41.619010 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:06:41 crc kubenswrapper[4928]: E1122 00:06:41.619226 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.645421 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.645471 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.645483 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.645499 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.645510 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:41Z","lastTransitionTime":"2025-11-22T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.748747 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.749498 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.749517 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.749547 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.749564 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:41Z","lastTransitionTime":"2025-11-22T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.852285 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.852536 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.852676 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.852780 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.852911 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:41Z","lastTransitionTime":"2025-11-22T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.956509 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.957093 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.957265 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.957444 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:41 crc kubenswrapper[4928]: I1122 00:06:41.957584 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:41Z","lastTransitionTime":"2025-11-22T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.031883 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3aa30f0-2a7f-4b60-b254-3e894a581f89-metrics-certs\") pod \"network-metrics-daemon-kx6zl\" (UID: \"b3aa30f0-2a7f-4b60-b254-3e894a581f89\") " pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:06:42 crc kubenswrapper[4928]: E1122 00:06:42.032208 4928 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 00:06:42 crc kubenswrapper[4928]: E1122 00:06:42.032341 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3aa30f0-2a7f-4b60-b254-3e894a581f89-metrics-certs podName:b3aa30f0-2a7f-4b60-b254-3e894a581f89 nodeName:}" failed. No retries permitted until 2025-11-22 00:06:44.032304168 +0000 UTC m=+39.320335399 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3aa30f0-2a7f-4b60-b254-3e894a581f89-metrics-certs") pod "network-metrics-daemon-kx6zl" (UID: "b3aa30f0-2a7f-4b60-b254-3e894a581f89") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.061115 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.061365 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.061523 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.061708 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.061934 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:42Z","lastTransitionTime":"2025-11-22T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.166260 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.166333 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.166351 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.166381 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.166399 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:42Z","lastTransitionTime":"2025-11-22T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.269725 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.269832 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.269860 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.269898 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.269921 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:42Z","lastTransitionTime":"2025-11-22T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.372868 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.372943 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.372970 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.373003 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.373031 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:42Z","lastTransitionTime":"2025-11-22T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.476528 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.476632 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.476673 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.476712 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.476755 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:42Z","lastTransitionTime":"2025-11-22T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.579163 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.579213 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.579225 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.579243 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.579253 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:42Z","lastTransitionTime":"2025-11-22T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.682066 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.682114 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.682126 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.682149 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.682166 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:42Z","lastTransitionTime":"2025-11-22T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.784905 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.784953 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.784967 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.784986 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.784998 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:42Z","lastTransitionTime":"2025-11-22T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.888030 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.888077 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.888090 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.888107 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.888115 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:42Z","lastTransitionTime":"2025-11-22T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.990268 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.990327 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.990346 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.990370 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:42 crc kubenswrapper[4928]: I1122 00:06:42.990387 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:42Z","lastTransitionTime":"2025-11-22T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.093228 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.093320 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.093342 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.093371 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.093392 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:43Z","lastTransitionTime":"2025-11-22T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.196176 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.196224 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.196236 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.196254 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.196267 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:43Z","lastTransitionTime":"2025-11-22T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.299617 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.299662 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.299671 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.299688 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.299698 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:43Z","lastTransitionTime":"2025-11-22T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.402781 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.403510 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.403548 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.403582 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.403609 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:43Z","lastTransitionTime":"2025-11-22T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.506369 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.506433 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.506452 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.506483 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.506501 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:43Z","lastTransitionTime":"2025-11-22T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.544342 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.571782 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82defc75c77651960186b4f8462896b6a36fa37d0b0f1ac71562054e7edf4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cdd7a658ab7af671ad8ac93fe4cd4fe3a68d96a3dfc000f6b44ab134a0337f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:06:36Z\\\",\\\"message\\\":\\\"ft/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1122 00:06:36.809055 6235 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809316 6235 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809580 6235 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809600 6235 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809628 6235 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809634 6235 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1122 00:06:36.809662 6235 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1122 00:06:36.809713 6235 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 00:06:36.809727 6235 factory.go:656] Stopping watch factory\\\\nI1122 00:06:36.809739 6235 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1122 00:06:36.809745 6235 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 00:06:36.809751 6235 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82defc75c77651960186b4f8462896b6a36fa37d0b0f1ac71562054e7edf4e2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:06:38Z\\\",\\\"message\\\":\\\"ization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-22T00:06:38Z is after 2025-08-24T17:21:41Z]\\\\nI1122 00:06:38.633885 6375 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-hp7cp\\\\nI1122 00:06:38.633889 6375 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-nchmx in node crc\\\\nI1122 00:06:38.633895 6375 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-hp7cp\\\\nI1122 00:06:38.633899 6375 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-kx2zm\\\\nI1122 00:06:38.633901 6375 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-nchmx after 0 failed attempt(s)\\\\nI1122 00:06:38.633908 6375 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-kx2zm\\\\nI1122 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"hos
t-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:43Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.585406 4928 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-kx6zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3aa30f0-2a7f-4b60-b254-3e894a581f89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kx6zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:43Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:43 crc 
kubenswrapper[4928]: I1122 00:06:43.601377 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:43Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.609411 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.609458 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.609470 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.609488 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.609500 4928 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:43Z","lastTransitionTime":"2025-11-22T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.617986 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.618035 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.618148 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.618329 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:06:43 crc kubenswrapper[4928]: E1122 00:06:43.618326 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:06:43 crc kubenswrapper[4928]: E1122 00:06:43.618536 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:06:43 crc kubenswrapper[4928]: E1122 00:06:43.618627 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:06:43 crc kubenswrapper[4928]: E1122 00:06:43.618719 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.625038 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath
\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 
00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:43Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.637615 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-22T00:06:43Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.651814 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:43Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.664089 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:43Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.676871 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:43Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.696394 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d55ec606decfa92e80435c75e4334f3af401ab6e2a3a85f755bc5ab60fbf97d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bdf9
d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:43Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.710998 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e892ca5e0665be6285dce09c13baff951eb1b769532c1b139517e1c0e91b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef08c6da00a3a64de0c044927dfc4cc482bc16403a3b914e457410e85f127cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9g8jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-22T00:06:43Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.712869 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.712903 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.712913 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.712928 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.712938 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:43Z","lastTransitionTime":"2025-11-22T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.724925 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:43Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.737638 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:43Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.751220 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:43Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.765768 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4
860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:43Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.777249 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:43Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.787715 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378c39793deccb986522ae643c2da3b423aa1711bc63648c3cbf01db65f95823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:43Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.815973 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.816042 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.816057 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.816080 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.816097 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:43Z","lastTransitionTime":"2025-11-22T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.919005 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.919077 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.919101 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.919131 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:43 crc kubenswrapper[4928]: I1122 00:06:43.919150 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:43Z","lastTransitionTime":"2025-11-22T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.022027 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.022105 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.022118 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.022139 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.022151 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:44Z","lastTransitionTime":"2025-11-22T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.055126 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3aa30f0-2a7f-4b60-b254-3e894a581f89-metrics-certs\") pod \"network-metrics-daemon-kx6zl\" (UID: \"b3aa30f0-2a7f-4b60-b254-3e894a581f89\") " pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:06:44 crc kubenswrapper[4928]: E1122 00:06:44.055282 4928 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 00:06:44 crc kubenswrapper[4928]: E1122 00:06:44.055362 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3aa30f0-2a7f-4b60-b254-3e894a581f89-metrics-certs podName:b3aa30f0-2a7f-4b60-b254-3e894a581f89 nodeName:}" failed. No retries permitted until 2025-11-22 00:06:48.055341512 +0000 UTC m=+43.343372713 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3aa30f0-2a7f-4b60-b254-3e894a581f89-metrics-certs") pod "network-metrics-daemon-kx6zl" (UID: "b3aa30f0-2a7f-4b60-b254-3e894a581f89") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.124459 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.124496 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.124507 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.124524 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.124537 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:44Z","lastTransitionTime":"2025-11-22T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.227972 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.228028 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.228044 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.228066 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.228081 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:44Z","lastTransitionTime":"2025-11-22T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.330408 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.330451 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.330464 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.330482 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.330496 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:44Z","lastTransitionTime":"2025-11-22T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.433993 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.434072 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.434092 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.434120 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.434138 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:44Z","lastTransitionTime":"2025-11-22T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.537513 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.537578 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.537630 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.537662 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.537685 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:44Z","lastTransitionTime":"2025-11-22T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.640661 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.640727 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.640749 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.640781 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.640845 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:44Z","lastTransitionTime":"2025-11-22T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.743885 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.743960 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.743982 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.744012 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.744030 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:44Z","lastTransitionTime":"2025-11-22T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.847206 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.847287 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.847318 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.847349 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.847374 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:44Z","lastTransitionTime":"2025-11-22T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.950693 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.950756 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.950775 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.950828 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:44 crc kubenswrapper[4928]: I1122 00:06:44.950848 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:44Z","lastTransitionTime":"2025-11-22T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.053616 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.053676 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.053692 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.053716 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.053731 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:45Z","lastTransitionTime":"2025-11-22T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.157238 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.157291 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.157305 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.157329 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.157344 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:45Z","lastTransitionTime":"2025-11-22T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.260609 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.260689 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.260718 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.260749 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.260776 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:45Z","lastTransitionTime":"2025-11-22T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.363292 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.363371 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.363390 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.363420 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.363439 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:45Z","lastTransitionTime":"2025-11-22T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.466020 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.466104 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.466130 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.466188 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.466217 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:45Z","lastTransitionTime":"2025-11-22T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.569497 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.569976 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.570154 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.570346 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.570574 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:45Z","lastTransitionTime":"2025-11-22T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.617964 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.618045 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.618142 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:06:45 crc kubenswrapper[4928]: E1122 00:06:45.618246 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.618362 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:06:45 crc kubenswrapper[4928]: E1122 00:06:45.618590 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:06:45 crc kubenswrapper[4928]: E1122 00:06:45.618724 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:06:45 crc kubenswrapper[4928]: E1122 00:06:45.618874 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.639625 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e892ca5e0665be6285dce09c13baff951eb1b769532c1b139517e1c0e91b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7934
26f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef08c6da00a3a64de0c044927dfc4cc482bc16403a3b914e457410e85f127cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:39Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9g8jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.654093 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.672326 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.673601 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.673655 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.673670 4928 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.673692 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.673709 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:45Z","lastTransitionTime":"2025-11-22T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.690597 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d55ec606decfa92e804
35c75e4334f3af401ab6e2a3a85f755bc5ab60fbf97d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.702736 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4
860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.708839 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.708985 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.709047 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:45 crc 
kubenswrapper[4928]: I1122 00:06:45.709125 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.709216 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:45Z","lastTransitionTime":"2025-11-22T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.715743 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:45 crc kubenswrapper[4928]: E1122 00:06:45.726350 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.729855 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378c39793deccb986522ae643c2da3b423aa1711bc
63648c3cbf01db65f95823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.730623 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.730653 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.730666 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.730688 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.730703 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:45Z","lastTransitionTime":"2025-11-22T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.747143 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:45 crc kubenswrapper[4928]: E1122 00:06:45.749741 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.753478 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.753547 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.753563 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.753583 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.753597 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:45Z","lastTransitionTime":"2025-11-22T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.766858 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:45 crc kubenswrapper[4928]: E1122 00:06:45.769996 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.774023 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.774047 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.774058 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.774074 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.774083 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:45Z","lastTransitionTime":"2025-11-22T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.790507 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:45 crc kubenswrapper[4928]: E1122 00:06:45.793364 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.798470 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.798518 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.798536 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.798556 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.798568 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:45Z","lastTransitionTime":"2025-11-22T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:45 crc kubenswrapper[4928]: E1122 00:06:45.810682 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:45 crc kubenswrapper[4928]: E1122 00:06:45.810820 4928 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.810958 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82defc75c77651960186b4f8462896b6a36fa37d0b0f1ac71562054e7edf4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cdd7a658ab7af671ad8ac93fe4cd4fe3a68d96a3dfc000f6b44ab134a0337f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:06:36Z\\\",\\\"message\\\":\\\"ft/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1122 00:06:36.809055 6235 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809316 6235 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809580 6235 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809600 6235 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809628 6235 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 00:06:36.809634 6235 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1122 00:06:36.809662 6235 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1122 00:06:36.809713 6235 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 00:06:36.809727 6235 factory.go:656] Stopping watch factory\\\\nI1122 00:06:36.809739 6235 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1122 00:06:36.809745 6235 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 00:06:36.809751 6235 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82defc75c77651960186b4f8462896b6a36fa37d0b0f1ac71562054e7edf4e2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:06:38Z\\\",\\\"message\\\":\\\"ization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-22T00:06:38Z is after 2025-08-24T17:21:41Z]\\\\nI1122 00:06:38.633885 6375 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-hp7cp\\\\nI1122 00:06:38.633889 6375 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-nchmx in node crc\\\\nI1122 00:06:38.633895 6375 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-hp7cp\\\\nI1122 00:06:38.633899 6375 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-kx2zm\\\\nI1122 00:06:38.633901 6375 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-nchmx after 0 failed attempt(s)\\\\nI1122 00:06:38.633908 6375 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-kx2zm\\\\nI1122 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"hos
t-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.812158 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.812189 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.812200 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.812215 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.812225 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:45Z","lastTransitionTime":"2025-11-22T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.821884 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kx6zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3aa30f0-2a7f-4b60-b254-3e894a581f89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kx6zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:45 crc 
kubenswrapper[4928]: I1122 00:06:45.834351 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c82
77ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 
00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.846785 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-22T00:06:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.858041 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.868028 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.915411 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.915450 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.915462 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.915482 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:45 crc kubenswrapper[4928]: I1122 00:06:45.915494 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:45Z","lastTransitionTime":"2025-11-22T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.018170 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.018212 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.018227 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.018246 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.018258 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:46Z","lastTransitionTime":"2025-11-22T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.121380 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.121423 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.121433 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.121452 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.121461 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:46Z","lastTransitionTime":"2025-11-22T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.223588 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.223622 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.223632 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.223650 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.223665 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:46Z","lastTransitionTime":"2025-11-22T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.326588 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.326633 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.326643 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.326659 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.326668 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:46Z","lastTransitionTime":"2025-11-22T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.429463 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.429526 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.429544 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.429568 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.429586 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:46Z","lastTransitionTime":"2025-11-22T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.532582 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.532625 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.532634 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.532651 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.532661 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:46Z","lastTransitionTime":"2025-11-22T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.635080 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.635126 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.635137 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.635153 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.635165 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:46Z","lastTransitionTime":"2025-11-22T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.737759 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.737827 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.737841 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.737861 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.737873 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:46Z","lastTransitionTime":"2025-11-22T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.840484 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.840520 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.840530 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.840545 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.840554 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:46Z","lastTransitionTime":"2025-11-22T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.942537 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.942581 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.942592 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.942610 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:46 crc kubenswrapper[4928]: I1122 00:06:46.942621 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:46Z","lastTransitionTime":"2025-11-22T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.044294 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.044355 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.044372 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.044397 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.044412 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:47Z","lastTransitionTime":"2025-11-22T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.147041 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.147110 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.147122 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.147139 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.147150 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:47Z","lastTransitionTime":"2025-11-22T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.249742 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.249778 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.249801 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.249817 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.249832 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:47Z","lastTransitionTime":"2025-11-22T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.352988 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.353036 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.353047 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.353066 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.353098 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:47Z","lastTransitionTime":"2025-11-22T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.455946 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.455993 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.456009 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.456030 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.456043 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:47Z","lastTransitionTime":"2025-11-22T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.558429 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.558461 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.558471 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.558485 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.558495 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:47Z","lastTransitionTime":"2025-11-22T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.617484 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.617524 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.617526 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.617571 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:47 crc kubenswrapper[4928]: E1122 00:06:47.617672 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:06:47 crc kubenswrapper[4928]: E1122 00:06:47.617922 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:06:47 crc kubenswrapper[4928]: E1122 00:06:47.618065 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:06:47 crc kubenswrapper[4928]: E1122 00:06:47.618143 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.661137 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.661179 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.661191 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.661209 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.661221 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:47Z","lastTransitionTime":"2025-11-22T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.764397 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.764463 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.764483 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.764510 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.764528 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:47Z","lastTransitionTime":"2025-11-22T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.866654 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.866711 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.866723 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.866746 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.866764 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:47Z","lastTransitionTime":"2025-11-22T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.969705 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.969754 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.969775 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.969845 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:47 crc kubenswrapper[4928]: I1122 00:06:47.969864 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:47Z","lastTransitionTime":"2025-11-22T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.073054 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.073124 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.073143 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.073168 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.073186 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:48Z","lastTransitionTime":"2025-11-22T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.102728 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3aa30f0-2a7f-4b60-b254-3e894a581f89-metrics-certs\") pod \"network-metrics-daemon-kx6zl\" (UID: \"b3aa30f0-2a7f-4b60-b254-3e894a581f89\") " pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:06:48 crc kubenswrapper[4928]: E1122 00:06:48.102988 4928 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 00:06:48 crc kubenswrapper[4928]: E1122 00:06:48.103058 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3aa30f0-2a7f-4b60-b254-3e894a581f89-metrics-certs podName:b3aa30f0-2a7f-4b60-b254-3e894a581f89 nodeName:}" failed. No retries permitted until 2025-11-22 00:06:56.103031675 +0000 UTC m=+51.391062886 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3aa30f0-2a7f-4b60-b254-3e894a581f89-metrics-certs") pod "network-metrics-daemon-kx6zl" (UID: "b3aa30f0-2a7f-4b60-b254-3e894a581f89") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.177520 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.177602 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.177626 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.177659 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.177682 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:48Z","lastTransitionTime":"2025-11-22T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.281420 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.281497 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.281517 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.281544 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.281564 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:48Z","lastTransitionTime":"2025-11-22T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.385657 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.385728 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.385752 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.385786 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.385846 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:48Z","lastTransitionTime":"2025-11-22T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.488762 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.488896 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.488918 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.488948 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.488967 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:48Z","lastTransitionTime":"2025-11-22T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.592437 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.592528 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.592554 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.592592 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.592618 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:48Z","lastTransitionTime":"2025-11-22T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.695627 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.695733 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.695752 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.695827 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.695858 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:48Z","lastTransitionTime":"2025-11-22T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.800148 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.800250 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.800278 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.800317 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.800345 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:48Z","lastTransitionTime":"2025-11-22T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.904455 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.904520 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.904536 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.904559 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:48 crc kubenswrapper[4928]: I1122 00:06:48.904574 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:48Z","lastTransitionTime":"2025-11-22T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.007423 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.007472 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.007483 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.007501 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.007514 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:49Z","lastTransitionTime":"2025-11-22T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.110852 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.110903 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.110924 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.110951 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.110969 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:49Z","lastTransitionTime":"2025-11-22T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.214352 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.214429 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.214448 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.214477 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.214499 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:49Z","lastTransitionTime":"2025-11-22T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.318270 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.318334 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.318354 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.318378 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.318397 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:49Z","lastTransitionTime":"2025-11-22T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.421250 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.421335 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.421355 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.421384 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.421405 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:49Z","lastTransitionTime":"2025-11-22T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.524686 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.524735 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.524752 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.524773 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.524787 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:49Z","lastTransitionTime":"2025-11-22T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.618036 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.618072 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.618075 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:49 crc kubenswrapper[4928]: E1122 00:06:49.618279 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:06:49 crc kubenswrapper[4928]: E1122 00:06:49.618437 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.618515 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:06:49 crc kubenswrapper[4928]: E1122 00:06:49.618632 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:06:49 crc kubenswrapper[4928]: E1122 00:06:49.618889 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.627947 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.628004 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.628023 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.628053 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.628075 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:49Z","lastTransitionTime":"2025-11-22T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.731139 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.731196 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.731215 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.731242 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.731261 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:49Z","lastTransitionTime":"2025-11-22T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.834759 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.835014 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.835081 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.835113 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.835168 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:49Z","lastTransitionTime":"2025-11-22T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.938955 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.939103 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.939124 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.939154 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:49 crc kubenswrapper[4928]: I1122 00:06:49.939176 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:49Z","lastTransitionTime":"2025-11-22T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.042647 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.042715 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.042737 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.042764 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.042783 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:50Z","lastTransitionTime":"2025-11-22T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.146445 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.146521 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.146539 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.146567 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.146589 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:50Z","lastTransitionTime":"2025-11-22T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.249900 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.249971 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.249993 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.250016 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.250059 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:50Z","lastTransitionTime":"2025-11-22T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.352742 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.352850 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.352881 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.352914 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.352938 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:50Z","lastTransitionTime":"2025-11-22T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.457205 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.457309 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.457333 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.457370 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.457390 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:50Z","lastTransitionTime":"2025-11-22T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.560863 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.560922 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.560938 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.560963 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.560982 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:50Z","lastTransitionTime":"2025-11-22T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.631424 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.632928 4928 scope.go:117] "RemoveContainer" containerID="f82defc75c77651960186b4f8462896b6a36fa37d0b0f1ac71562054e7edf4e2" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.664469 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.664519 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.664531 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.664550 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.664562 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:50Z","lastTransitionTime":"2025-11-22T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.665666 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:50Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.683089 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-22T00:06:50Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.701375 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:50Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.722149 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:50Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.745827 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d55ec606decfa92e80435c75e4334f3af401ab6e2a3a85f755bc5ab60fbf97d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bdf9
d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:50Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.760941 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e892ca5e0665be6285dce09c13baff951eb1b769532c1b139517e1c0e91b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef08c6da00a3a64de0c044927dfc4cc482bc16403a3b914e457410e85f127cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9g8jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-22T00:06:50Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.767460 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.767520 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.767537 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.767561 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.767579 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:50Z","lastTransitionTime":"2025-11-22T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.776004 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:50Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.790764 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:50Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.808749 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:50Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.821952 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4
860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:50Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.835324 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:50Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.846583 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378c39793deccb986522ae643c2da3b423aa1711bc63648c3cbf01db65f95823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:50Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.859287 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:50Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.870635 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:50Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.870844 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.870859 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.870868 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.870883 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.870893 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:50Z","lastTransitionTime":"2025-11-22T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.879232 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kx6zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3aa30f0-2a7f-4b60-b254-3e894a581f89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kx6zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:50Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:50 crc 
kubenswrapper[4928]: I1122 00:06:50.895085 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82defc75c77651960186b4f8462896b6a36fa37d0b0f1ac71562054e7edf4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82defc75c77651960186b4f8462896b6a36fa37d0b0f1ac71562054e7edf4e2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:06:38Z\\\",\\\"message\\\":\\\"ization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": 
failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:38Z is after 2025-08-24T17:21:41Z]\\\\nI1122 00:06:38.633885 6375 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-hp7cp\\\\nI1122 00:06:38.633889 6375 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-nchmx in node crc\\\\nI1122 00:06:38.633895 6375 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-hp7cp\\\\nI1122 00:06:38.633899 6375 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-kx2zm\\\\nI1122 00:06:38.633901 6375 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-nchmx after 0 failed attempt(s)\\\\nI1122 00:06:38.633908 6375 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-kx2zm\\\\nI1122 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nchmx_openshift-ovn-kubernetes(31460a9d-62d5-4dae-bb57-4db45544352d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d37
6dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:50Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.973101 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.973132 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.973140 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.973156 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:50 crc kubenswrapper[4928]: I1122 00:06:50.973165 4928 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:50Z","lastTransitionTime":"2025-11-22T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.077565 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.077608 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.077621 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.077637 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.077646 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:51Z","lastTransitionTime":"2025-11-22T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.179782 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.179838 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.179848 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.179866 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.179876 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:51Z","lastTransitionTime":"2025-11-22T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.282063 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.282105 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.282116 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.282133 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.282145 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:51Z","lastTransitionTime":"2025-11-22T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.351942 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nchmx_31460a9d-62d5-4dae-bb57-4db45544352d/ovnkube-controller/1.log" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.354396 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" event={"ID":"31460a9d-62d5-4dae-bb57-4db45544352d","Type":"ContainerStarted","Data":"3f8cd6e128376256f4d00303ee3bc4018d1ce68512e274c822c4ff7c60cb0354"} Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.354936 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.370900 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-22T00:06:51Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.382696 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name
\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:51Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.384219 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.384273 4928 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.384289 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.384312 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.384325 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:51Z","lastTransitionTime":"2025-11-22T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.396530 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:51Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.411470 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378c39793deccb986522ae643c2da3b423aa1711bc63648c3cbf01db65f95823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:51Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.425510 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:51Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.442239 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:51Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.456088 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kx6zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3aa30f0-2a7f-4b60-b254-3e894a581f89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kx6zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:51Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:51 crc 
kubenswrapper[4928]: I1122 00:06:51.477787 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8cd6e128376256f4d00303ee3bc4018d1ce68512e274c822c4ff7c60cb0354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82defc75c77651960186b4f8462896b6a36fa37d0b0f1ac71562054e7edf4e2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:06:38Z\\\",\\\"message\\\":\\\"ization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": 
failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:38Z is after 2025-08-24T17:21:41Z]\\\\nI1122 00:06:38.633885 6375 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-hp7cp\\\\nI1122 00:06:38.633889 6375 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-nchmx in node crc\\\\nI1122 00:06:38.633895 6375 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-hp7cp\\\\nI1122 00:06:38.633899 6375 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-kx2zm\\\\nI1122 00:06:38.633901 6375 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-nchmx after 0 failed attempt(s)\\\\nI1122 00:06:38.633908 6375 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-kx2zm\\\\nI1122 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:51Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.487203 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.487240 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.487255 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.487276 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.487290 4928 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:51Z","lastTransitionTime":"2025-11-22T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.493554 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9
9c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:51Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.505750 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-22T00:06:51Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.523119 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:51Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.540314 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:51Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.558125 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d55ec606decfa92e80435c75e4334f3af401ab6e2a3a85f755bc5ab60fbf97d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bdf9
d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:51Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.568515 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e892ca5e0665be6285dce09c13baff951eb1b769532c1b139517e1c0e91b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef08c6da00a3a64de0c044927dfc4cc482bc16403a3b914e457410e85f127cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9g8jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-22T00:06:51Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.578955 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:51Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.588870 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:51Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.590108 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.590156 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.590168 4928 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.590188 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.590201 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:51Z","lastTransitionTime":"2025-11-22T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.617446 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.617514 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.617591 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:06:51 crc kubenswrapper[4928]: E1122 00:06:51.617719 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.617784 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:06:51 crc kubenswrapper[4928]: E1122 00:06:51.617898 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:06:51 crc kubenswrapper[4928]: E1122 00:06:51.617949 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:06:51 crc kubenswrapper[4928]: E1122 00:06:51.618118 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.693914 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.693984 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.694002 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.694024 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.694036 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:51Z","lastTransitionTime":"2025-11-22T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.796557 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.796613 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.796627 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.796649 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.796663 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:51Z","lastTransitionTime":"2025-11-22T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.898894 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.898927 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.898942 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.898976 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:51 crc kubenswrapper[4928]: I1122 00:06:51.898994 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:51Z","lastTransitionTime":"2025-11-22T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.001607 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.001638 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.001648 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.001662 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.001672 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:52Z","lastTransitionTime":"2025-11-22T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.103775 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.103852 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.103872 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.103899 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.103912 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:52Z","lastTransitionTime":"2025-11-22T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.206829 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.206892 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.206911 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.206941 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.206958 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:52Z","lastTransitionTime":"2025-11-22T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.310662 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.310716 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.310732 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.310758 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.310776 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:52Z","lastTransitionTime":"2025-11-22T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.362011 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nchmx_31460a9d-62d5-4dae-bb57-4db45544352d/ovnkube-controller/2.log" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.363326 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nchmx_31460a9d-62d5-4dae-bb57-4db45544352d/ovnkube-controller/1.log" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.367709 4928 generic.go:334] "Generic (PLEG): container finished" podID="31460a9d-62d5-4dae-bb57-4db45544352d" containerID="3f8cd6e128376256f4d00303ee3bc4018d1ce68512e274c822c4ff7c60cb0354" exitCode=1 Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.367766 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" event={"ID":"31460a9d-62d5-4dae-bb57-4db45544352d","Type":"ContainerDied","Data":"3f8cd6e128376256f4d00303ee3bc4018d1ce68512e274c822c4ff7c60cb0354"} Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.367851 4928 scope.go:117] "RemoveContainer" containerID="f82defc75c77651960186b4f8462896b6a36fa37d0b0f1ac71562054e7edf4e2" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.369151 4928 scope.go:117] "RemoveContainer" containerID="3f8cd6e128376256f4d00303ee3bc4018d1ce68512e274c822c4ff7c60cb0354" Nov 22 00:06:52 crc kubenswrapper[4928]: E1122 00:06:52.369565 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-nchmx_openshift-ovn-kubernetes(31460a9d-62d5-4dae-bb57-4db45544352d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.387743 4928 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:52Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.406124 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:52Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.413930 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.414294 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.414305 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.414319 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.414349 4928 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:52Z","lastTransitionTime":"2025-11-22T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.422295 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d55ec606decfa92e80435c75e4334f3af401ab6e2a3a85f755bc5ab60fbf97d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:52Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.436231 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e892ca5e0665be6285dce09c13baff951eb1b769532c1b139517e1c0e91b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef08c6da00a3a64de0c044927dfc4cc482bc1
6403a3b914e457410e85f127cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9g8jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:52Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.447704 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378c39793deccb986522ae643c2da3b423aa1711bc63648c3cbf01db65f95823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:52Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.458265 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:52Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.472265 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:52Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.483983 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:52Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.493823 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4
860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:52Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.506597 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:52Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.516787 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:52 crc 
kubenswrapper[4928]: I1122 00:06:52.516811 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.516845 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.516859 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.516867 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:52Z","lastTransitionTime":"2025-11-22T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.524981 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8cd6e128376256f4d00303ee3bc4018d1ce68512e274c822c4ff7c60cb0354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f82defc75c77651960186b4f8462896b6a36fa37d0b0f1ac71562054e7edf4e2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:06:38Z\\\",\\\"message\\\":\\\"ization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because 
it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:38Z is after 2025-08-24T17:21:41Z]\\\\nI1122 00:06:38.633885 6375 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-hp7cp\\\\nI1122 00:06:38.633889 6375 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-nchmx in node crc\\\\nI1122 00:06:38.633895 6375 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-hp7cp\\\\nI1122 00:06:38.633899 6375 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-kx2zm\\\\nI1122 00:06:38.633901 6375 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-nchmx after 0 failed attempt(s)\\\\nI1122 00:06:38.633908 6375 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-kx2zm\\\\nI1122 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f8cd6e128376256f4d00303ee3bc4018d1ce68512e274c822c4ff7c60cb0354\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:06:51Z\\\",\\\"message\\\":\\\"w:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1122 00:06:51.519106 6572 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b
4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:52Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.534350 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kx6zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3aa30f0-2a7f-4b60-b254-3e894a581f89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kx6zl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:52Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.543634 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:52Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.553617 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:52Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.564867 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b573
5ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:52Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.575942 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-22T00:06:52Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.619419 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.619456 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.619465 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.619481 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.619490 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:52Z","lastTransitionTime":"2025-11-22T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.722354 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.722416 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.722426 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.722442 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.722454 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:52Z","lastTransitionTime":"2025-11-22T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.825404 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.825460 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.825475 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.825495 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.825507 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:52Z","lastTransitionTime":"2025-11-22T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.927756 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.927829 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.927841 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.927860 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:52 crc kubenswrapper[4928]: I1122 00:06:52.927870 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:52Z","lastTransitionTime":"2025-11-22T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.031112 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.031155 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.031166 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.031186 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.031198 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:53Z","lastTransitionTime":"2025-11-22T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.134113 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.134161 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.134172 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.134190 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.134234 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:53Z","lastTransitionTime":"2025-11-22T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.237085 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.237153 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.237164 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.237177 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.237186 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:53Z","lastTransitionTime":"2025-11-22T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.340464 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.340605 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.340649 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.340752 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.340826 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:53Z","lastTransitionTime":"2025-11-22T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.372942 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nchmx_31460a9d-62d5-4dae-bb57-4db45544352d/ovnkube-controller/2.log" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.377394 4928 scope.go:117] "RemoveContainer" containerID="3f8cd6e128376256f4d00303ee3bc4018d1ce68512e274c822c4ff7c60cb0354" Nov 22 00:06:53 crc kubenswrapper[4928]: E1122 00:06:53.377588 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-nchmx_openshift-ovn-kubernetes(31460a9d-62d5-4dae-bb57-4db45544352d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.407344 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8cd6e128376256f4d00303ee3bc4018d1ce68512e274c822c4ff7c60cb0354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f8cd6e128376256f4d00303ee3bc4018d1ce68512e274c822c4ff7c60cb0354\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:06:51Z\\\",\\\"message\\\":\\\"w:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1122 00:06:51.519106 6572 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nchmx_openshift-ovn-kubernetes(31460a9d-62d5-4dae-bb57-4db45544352d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d37
6dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:53Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.419217 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kx6zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3aa30f0-2a7f-4b60-b254-3e894a581f89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kx6zl\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:53Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.433977 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:53Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.444242 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.444270 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.444280 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.444310 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.444322 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:53Z","lastTransitionTime":"2025-11-22T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.445216 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:53Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.461368 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:53Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.475255 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b573
5ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:53Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.487002 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:53Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.500943 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:53Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.521206 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d55ec606decfa92e80435c75e4334f3af401ab6e2a3a85f755bc5ab60fbf97d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bdf9
d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:53Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.532749 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e892ca5e0665be6285dce09c13baff951eb1b769532c1b139517e1c0e91b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef08c6da00a3a64de0c044927dfc4cc482bc16403a3b914e457410e85f127cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9g8jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-22T00:06:53Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.546385 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.546443 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.546460 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.546482 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.546497 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:53Z","lastTransitionTime":"2025-11-22T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.549948 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:53Z 
is after 2025-08-24T17:21:41Z" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.559423 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378c39793deccb986522ae643c2da3b423aa1711bc63648c3cbf01db65f95823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:53Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.570215 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:53Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.581175 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:53Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.590995 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:53Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.600090 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4
860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:53Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.617742 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.617844 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.617883 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:06:53 crc kubenswrapper[4928]: E1122 00:06:53.617901 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:06:53 crc kubenswrapper[4928]: E1122 00:06:53.617961 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.618008 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:06:53 crc kubenswrapper[4928]: E1122 00:06:53.618146 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:06:53 crc kubenswrapper[4928]: E1122 00:06:53.618266 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.648871 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.648906 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.648917 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.648931 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.648942 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:53Z","lastTransitionTime":"2025-11-22T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.751561 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.751599 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.751615 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.751637 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.751654 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:53Z","lastTransitionTime":"2025-11-22T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.853700 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.853732 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.853742 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.853756 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.853767 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:53Z","lastTransitionTime":"2025-11-22T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.957427 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.957492 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.957514 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.957549 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:53 crc kubenswrapper[4928]: I1122 00:06:53.957573 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:53Z","lastTransitionTime":"2025-11-22T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.061082 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.061148 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.061172 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.061202 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.061226 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:54Z","lastTransitionTime":"2025-11-22T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.164524 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.164564 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.164573 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.164588 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.164597 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:54Z","lastTransitionTime":"2025-11-22T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.267895 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.267953 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.267966 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.267988 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.268000 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:54Z","lastTransitionTime":"2025-11-22T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.370850 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.371171 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.371296 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.371419 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.371523 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:54Z","lastTransitionTime":"2025-11-22T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.474473 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.474944 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.475077 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.475199 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.475332 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:54Z","lastTransitionTime":"2025-11-22T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.579107 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.579472 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.579857 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.580163 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.580490 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:54Z","lastTransitionTime":"2025-11-22T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.684049 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.684137 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.684158 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.684186 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.684207 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:54Z","lastTransitionTime":"2025-11-22T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.787817 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.787870 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.787882 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.787902 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.787917 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:54Z","lastTransitionTime":"2025-11-22T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.890126 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.890179 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.890191 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.890212 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.890224 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:54Z","lastTransitionTime":"2025-11-22T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.992306 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.992345 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.992356 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.992375 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:54 crc kubenswrapper[4928]: I1122 00:06:54.992385 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:54Z","lastTransitionTime":"2025-11-22T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.094137 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.094190 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.094209 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.094234 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.094276 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:55Z","lastTransitionTime":"2025-11-22T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.197262 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.197306 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.197317 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.197333 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.197342 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:55Z","lastTransitionTime":"2025-11-22T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.300251 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.300301 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.300320 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.300344 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.300358 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:55Z","lastTransitionTime":"2025-11-22T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.402482 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.402536 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.402603 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.402623 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.402636 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:55Z","lastTransitionTime":"2025-11-22T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.506075 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.506131 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.506147 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.506169 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.506182 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:55Z","lastTransitionTime":"2025-11-22T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.609029 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.609070 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.609079 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.609095 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.609109 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:55Z","lastTransitionTime":"2025-11-22T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.617603 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.617616 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.617655 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:06:55 crc kubenswrapper[4928]: E1122 00:06:55.617760 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.617807 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:06:55 crc kubenswrapper[4928]: E1122 00:06:55.617885 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:06:55 crc kubenswrapper[4928]: E1122 00:06:55.617913 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:06:55 crc kubenswrapper[4928]: E1122 00:06:55.617939 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.632912 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:55Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.646746 4928 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:55Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.663108 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:55Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.674197 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378c39793deccb986522ae643c2da3b423aa1711bc63648c3cbf01db65f95823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:55Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.686096 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:55Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.697973 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:55Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.708169 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kx6zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3aa30f0-2a7f-4b60-b254-3e894a581f89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kx6zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:55Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:55 crc 
kubenswrapper[4928]: I1122 00:06:55.711591 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.711646 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.711660 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.711677 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.711688 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:55Z","lastTransitionTime":"2025-11-22T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.725874 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8cd6e128376256f4d00303ee3bc4018d1ce68512e274c822c4ff7c60cb0354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f8cd6e128376256f4d00303ee3bc4018d1ce68512e274c822c4ff7c60cb0354\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:06:51Z\\\",\\\"message\\\":\\\"w:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} 
{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1122 00:06:51.519106 6572 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nchmx_openshift-ovn-kubernetes(31460a9d-62d5-4dae-bb57-4db45544352d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d37
6dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:55Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.742732 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b573
5ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:55Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.754553 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-22T00:06:55Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.766544 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:55Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.778146 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:55Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.790651 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d55ec606decfa92e80435c75e4334f3af401ab6e2a3a85f755bc5ab60fbf97d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bdf9
d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:55Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.802406 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e892ca5e0665be6285dce09c13baff951eb1b769532c1b139517e1c0e91b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef08c6da00a3a64de0c044927dfc4cc482bc16403a3b914e457410e85f127cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9g8jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-22T00:06:55Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.813178 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:55Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.814227 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.814730 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.814756 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.814816 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.814834 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:55Z","lastTransitionTime":"2025-11-22T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.825786 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:55Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.877985 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.878024 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.878034 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.878049 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.878063 4928 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:55Z","lastTransitionTime":"2025-11-22T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:55 crc kubenswrapper[4928]: E1122 00:06:55.890083 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:55Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.893636 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.893669 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.893677 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.893692 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.893700 4928 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:55Z","lastTransitionTime":"2025-11-22T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:55 crc kubenswrapper[4928]: E1122 00:06:55.904575 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:55Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.907906 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.907947 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.907962 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.907977 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.907988 4928 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:55Z","lastTransitionTime":"2025-11-22T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:55 crc kubenswrapper[4928]: E1122 00:06:55.920001 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:55Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.923680 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.923734 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.923745 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.923765 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.923776 4928 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:55Z","lastTransitionTime":"2025-11-22T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:55 crc kubenswrapper[4928]: E1122 00:06:55.935925 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:55Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.939577 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.939630 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.939642 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.939662 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.939676 4928 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:55Z","lastTransitionTime":"2025-11-22T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:55 crc kubenswrapper[4928]: E1122 00:06:55.953373 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:06:55Z is after 2025-08-24T17:21:41Z" Nov 22 00:06:55 crc kubenswrapper[4928]: E1122 00:06:55.953487 4928 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.956356 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.956388 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.956397 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:55 crc kubenswrapper[4928]: I1122 00:06:55.956414 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:55 crc 
kubenswrapper[4928]: I1122 00:06:55.956425 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:55Z","lastTransitionTime":"2025-11-22T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.058875 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.058950 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.058971 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.059000 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.059015 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:56Z","lastTransitionTime":"2025-11-22T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.162273 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.162357 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.162371 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.162390 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.162402 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:56Z","lastTransitionTime":"2025-11-22T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.203702 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3aa30f0-2a7f-4b60-b254-3e894a581f89-metrics-certs\") pod \"network-metrics-daemon-kx6zl\" (UID: \"b3aa30f0-2a7f-4b60-b254-3e894a581f89\") " pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:06:56 crc kubenswrapper[4928]: E1122 00:06:56.203958 4928 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 00:06:56 crc kubenswrapper[4928]: E1122 00:06:56.204071 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3aa30f0-2a7f-4b60-b254-3e894a581f89-metrics-certs podName:b3aa30f0-2a7f-4b60-b254-3e894a581f89 nodeName:}" failed. No retries permitted until 2025-11-22 00:07:12.204035917 +0000 UTC m=+67.492067158 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3aa30f0-2a7f-4b60-b254-3e894a581f89-metrics-certs") pod "network-metrics-daemon-kx6zl" (UID: "b3aa30f0-2a7f-4b60-b254-3e894a581f89") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.266004 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.266170 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.266203 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.266250 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.266280 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:56Z","lastTransitionTime":"2025-11-22T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.369263 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.369319 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.369330 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.369350 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.369363 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:56Z","lastTransitionTime":"2025-11-22T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.472105 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.472143 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.472152 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.472167 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.472177 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:56Z","lastTransitionTime":"2025-11-22T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.574987 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.575027 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.575037 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.575055 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.575066 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:56Z","lastTransitionTime":"2025-11-22T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.677155 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.677191 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.677201 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.677217 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.677226 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:56Z","lastTransitionTime":"2025-11-22T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.779925 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.779969 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.779985 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.780002 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.780014 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:56Z","lastTransitionTime":"2025-11-22T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.882250 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.882346 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.882378 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.882409 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.882430 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:56Z","lastTransitionTime":"2025-11-22T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.985194 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.985233 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.985246 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.985261 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:56 crc kubenswrapper[4928]: I1122 00:06:56.985270 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:56Z","lastTransitionTime":"2025-11-22T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.087908 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.087964 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.087976 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.087996 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.088009 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:57Z","lastTransitionTime":"2025-11-22T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.191908 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.191965 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.191974 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.191995 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.192010 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:57Z","lastTransitionTime":"2025-11-22T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.294820 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.294861 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.294871 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.294889 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.294899 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:57Z","lastTransitionTime":"2025-11-22T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.317198 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.317329 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.317365 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:57 crc kubenswrapper[4928]: E1122 00:06:57.317442 4928 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 00:06:57 crc kubenswrapper[4928]: E1122 00:06:57.317484 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 00:07:29.317469372 +0000 UTC m=+84.605500563 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 00:06:57 crc kubenswrapper[4928]: E1122 00:06:57.317600 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:07:29.317592855 +0000 UTC m=+84.605624046 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:06:57 crc kubenswrapper[4928]: E1122 00:06:57.317656 4928 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 00:06:57 crc kubenswrapper[4928]: E1122 00:06:57.317677 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 00:07:29.317671657 +0000 UTC m=+84.605702848 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.396787 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.396882 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.396902 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.396930 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.396945 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:57Z","lastTransitionTime":"2025-11-22T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.418174 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:06:57 crc kubenswrapper[4928]: E1122 00:06:57.418499 4928 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 00:06:57 crc kubenswrapper[4928]: E1122 00:06:57.418567 4928 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 00:06:57 crc kubenswrapper[4928]: E1122 00:06:57.418585 4928 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:06:57 crc kubenswrapper[4928]: E1122 00:06:57.418678 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 00:07:29.418652009 +0000 UTC m=+84.706683280 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.500502 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.500568 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.500586 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.500617 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.500635 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:57Z","lastTransitionTime":"2025-11-22T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.519367 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:06:57 crc kubenswrapper[4928]: E1122 00:06:57.519680 4928 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 00:06:57 crc kubenswrapper[4928]: E1122 00:06:57.519746 4928 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 00:06:57 crc kubenswrapper[4928]: E1122 00:06:57.519772 4928 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:06:57 crc kubenswrapper[4928]: E1122 00:06:57.519925 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 00:07:29.519893469 +0000 UTC m=+84.807924700 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.603259 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.603339 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.603372 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.603424 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.603461 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:57Z","lastTransitionTime":"2025-11-22T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.618135 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.618238 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.618140 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.618290 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:57 crc kubenswrapper[4928]: E1122 00:06:57.618438 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:06:57 crc kubenswrapper[4928]: E1122 00:06:57.618671 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:06:57 crc kubenswrapper[4928]: E1122 00:06:57.618954 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:06:57 crc kubenswrapper[4928]: E1122 00:06:57.618993 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.706181 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.706260 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.706288 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.706322 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.706345 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:57Z","lastTransitionTime":"2025-11-22T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.809558 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.809612 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.809630 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.809651 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.809667 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:57Z","lastTransitionTime":"2025-11-22T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.914252 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.914292 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.914301 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.914332 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:57 crc kubenswrapper[4928]: I1122 00:06:57.914345 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:57Z","lastTransitionTime":"2025-11-22T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.017282 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.017310 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.017322 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.017339 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.017351 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:58Z","lastTransitionTime":"2025-11-22T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.119877 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.119919 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.119930 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.119948 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.119962 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:58Z","lastTransitionTime":"2025-11-22T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.223141 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.223501 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.223536 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.223562 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.223596 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:58Z","lastTransitionTime":"2025-11-22T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.330525 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.330611 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.330635 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.330671 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.330699 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:58Z","lastTransitionTime":"2025-11-22T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.434054 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.434343 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.434509 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.434918 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.435134 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:58Z","lastTransitionTime":"2025-11-22T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.538103 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.538387 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.538623 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.538760 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.539024 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:58Z","lastTransitionTime":"2025-11-22T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.643269 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.643365 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.643386 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.643450 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.643469 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:58Z","lastTransitionTime":"2025-11-22T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.746714 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.747022 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.747119 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.747200 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.747276 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:58Z","lastTransitionTime":"2025-11-22T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.850504 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.850759 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.850845 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.850911 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.850997 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:58Z","lastTransitionTime":"2025-11-22T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.954155 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.954693 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.954764 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.954867 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:58 crc kubenswrapper[4928]: I1122 00:06:58.954945 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:58Z","lastTransitionTime":"2025-11-22T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.058098 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.058413 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.058488 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.058596 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.058688 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:59Z","lastTransitionTime":"2025-11-22T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.161158 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.161208 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.161224 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.161244 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.161258 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:59Z","lastTransitionTime":"2025-11-22T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.264244 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.264315 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.264340 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.264368 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.264389 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:59Z","lastTransitionTime":"2025-11-22T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.367372 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.367431 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.367449 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.367479 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.367494 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:59Z","lastTransitionTime":"2025-11-22T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.470545 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.470592 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.470602 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.470618 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.470629 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:59Z","lastTransitionTime":"2025-11-22T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.573493 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.573937 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.573971 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.573992 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.574012 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:59Z","lastTransitionTime":"2025-11-22T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.617532 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.617961 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.618205 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:06:59 crc kubenswrapper[4928]: E1122 00:06:59.618202 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.618269 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:06:59 crc kubenswrapper[4928]: E1122 00:06:59.618364 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:06:59 crc kubenswrapper[4928]: E1122 00:06:59.618458 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:06:59 crc kubenswrapper[4928]: E1122 00:06:59.618580 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.676690 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.677057 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.677181 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.677344 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.677447 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:59Z","lastTransitionTime":"2025-11-22T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.780900 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.780984 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.781011 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.781043 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.781061 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:59Z","lastTransitionTime":"2025-11-22T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.884537 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.884632 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.884649 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.884676 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.884702 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:59Z","lastTransitionTime":"2025-11-22T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.987523 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.987602 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.987625 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.987656 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:06:59 crc kubenswrapper[4928]: I1122 00:06:59.987679 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:06:59Z","lastTransitionTime":"2025-11-22T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.090231 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.090279 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.090295 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.090319 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.090336 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:00Z","lastTransitionTime":"2025-11-22T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.193760 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.193895 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.193914 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.194202 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.194218 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:00Z","lastTransitionTime":"2025-11-22T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.298492 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.298551 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.298568 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.298591 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.298615 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:00Z","lastTransitionTime":"2025-11-22T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.400445 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.400526 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.400550 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.400583 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.400608 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:00Z","lastTransitionTime":"2025-11-22T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.503872 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.503932 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.503949 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.503975 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.503992 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:00Z","lastTransitionTime":"2025-11-22T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.607364 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.607407 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.607421 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.607438 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.607450 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:00Z","lastTransitionTime":"2025-11-22T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.711132 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.711192 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.711209 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.711233 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.711250 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:00Z","lastTransitionTime":"2025-11-22T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.815305 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.815365 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.815378 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.815399 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.815413 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:00Z","lastTransitionTime":"2025-11-22T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.917918 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.917963 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.917974 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.917992 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:00 crc kubenswrapper[4928]: I1122 00:07:00.918003 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:00Z","lastTransitionTime":"2025-11-22T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.020509 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.020542 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.020551 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.020564 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.020572 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:01Z","lastTransitionTime":"2025-11-22T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.123193 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.123238 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.123251 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.123269 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.123282 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:01Z","lastTransitionTime":"2025-11-22T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.226684 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.226759 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.226787 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.226861 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.226885 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:01Z","lastTransitionTime":"2025-11-22T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.329293 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.329332 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.329342 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.329359 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.329371 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:01Z","lastTransitionTime":"2025-11-22T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.431709 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.431759 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.431772 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.431821 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.431839 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:01Z","lastTransitionTime":"2025-11-22T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.531087 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.534357 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.534426 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.534446 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.534473 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.534491 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:01Z","lastTransitionTime":"2025-11-22T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.544113 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.553186 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ent
rypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:01Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.564431 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378c39793deccb986522ae643c2da3b423aa1711bc63648c3cbf01db65f95823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tm
p/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:01Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.582073 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:01Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.606451 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:01Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.617560 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.617639 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.617682 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:07:01 crc kubenswrapper[4928]: E1122 00:07:01.617742 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:07:01 crc kubenswrapper[4928]: E1122 00:07:01.617896 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.617978 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:07:01 crc kubenswrapper[4928]: E1122 00:07:01.618079 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:07:01 crc kubenswrapper[4928]: E1122 00:07:01.618172 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.624147 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:01Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.636919 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4
860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:01Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.637251 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.637313 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.637322 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:01 crc 
kubenswrapper[4928]: I1122 00:07:01.637340 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.637371 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:01Z","lastTransitionTime":"2025-11-22T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.656930 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8cd6e128376256f4d00303ee3bc4018d1ce68512e274c822c4ff7c60cb0354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f8cd6e128376256f4d00303ee3bc4018d1ce68512e274c822c4ff7c60cb0354\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:06:51Z\\\",\\\"message\\\":\\\"w:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} 
{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1122 00:06:51.519106 6572 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nchmx_openshift-ovn-kubernetes(31460a9d-62d5-4dae-bb57-4db45544352d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d37
6dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:01Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.668500 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kx6zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3aa30f0-2a7f-4b60-b254-3e894a581f89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kx6zl\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:01Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.681691 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:01Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.696286 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:01Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.714478 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:01Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.729019 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b573
5ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:01Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.739265 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:01Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.739345 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.739488 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.739500 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.739516 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.739528 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:01Z","lastTransitionTime":"2025-11-22T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.751206 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:01Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.762360 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d55ec606decfa92e80435c75e4334f3af401ab6e2a3a85f755bc5ab60fbf97d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bdf9
d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:01Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.771134 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e892ca5e0665be6285dce09c13baff951eb1b769532c1b139517e1c0e91b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef08c6da00a3a64de0c044927dfc4cc482bc16403a3b914e457410e85f127cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9g8jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-22T00:07:01Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.842515 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.842560 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.842573 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.842591 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.842603 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:01Z","lastTransitionTime":"2025-11-22T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.945375 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.945658 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.945743 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.945897 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:01 crc kubenswrapper[4928]: I1122 00:07:01.945978 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:01Z","lastTransitionTime":"2025-11-22T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.048173 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.048238 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.048249 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.048264 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.048272 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:02Z","lastTransitionTime":"2025-11-22T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.151837 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.151892 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.151908 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.151940 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.151958 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:02Z","lastTransitionTime":"2025-11-22T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.253970 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.254011 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.254021 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.254038 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.254050 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:02Z","lastTransitionTime":"2025-11-22T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.356667 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.356717 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.356728 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.356746 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.356759 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:02Z","lastTransitionTime":"2025-11-22T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.459047 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.459098 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.459109 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.459127 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.459140 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:02Z","lastTransitionTime":"2025-11-22T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.562628 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.562937 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.563006 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.563086 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.563155 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:02Z","lastTransitionTime":"2025-11-22T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.665702 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.665745 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.665756 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.665773 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.665784 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:02Z","lastTransitionTime":"2025-11-22T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.769047 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.769133 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.769159 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.769194 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.769218 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:02Z","lastTransitionTime":"2025-11-22T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.871637 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.871681 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.871693 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.871713 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.871726 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:02Z","lastTransitionTime":"2025-11-22T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.974262 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.974310 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.974323 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.974342 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:02 crc kubenswrapper[4928]: I1122 00:07:02.974362 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:02Z","lastTransitionTime":"2025-11-22T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.077062 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.077101 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.077112 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.077129 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.077140 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:03Z","lastTransitionTime":"2025-11-22T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.179154 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.179214 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.179225 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.179239 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.179248 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:03Z","lastTransitionTime":"2025-11-22T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.281256 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.281298 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.281307 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.281320 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.281329 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:03Z","lastTransitionTime":"2025-11-22T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.383874 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.383950 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.383974 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.384005 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.384026 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:03Z","lastTransitionTime":"2025-11-22T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.487369 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.487440 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.487465 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.487497 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.487520 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:03Z","lastTransitionTime":"2025-11-22T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.592213 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.592310 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.592337 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.592370 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.592407 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:03Z","lastTransitionTime":"2025-11-22T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.617436 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.617476 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.617520 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:07:03 crc kubenswrapper[4928]: E1122 00:07:03.617597 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:07:03 crc kubenswrapper[4928]: E1122 00:07:03.617671 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:07:03 crc kubenswrapper[4928]: E1122 00:07:03.617767 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.617966 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:07:03 crc kubenswrapper[4928]: E1122 00:07:03.618139 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.695349 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.695407 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.695423 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.695447 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.695463 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:03Z","lastTransitionTime":"2025-11-22T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.797885 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.797940 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.797950 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.797965 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.797974 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:03Z","lastTransitionTime":"2025-11-22T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.900096 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.900204 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.900214 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.900234 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:03 crc kubenswrapper[4928]: I1122 00:07:03.900252 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:03Z","lastTransitionTime":"2025-11-22T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.002625 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.002658 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.002668 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.002681 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.002699 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:04Z","lastTransitionTime":"2025-11-22T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.104955 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.105566 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.105671 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.105774 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.105892 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:04Z","lastTransitionTime":"2025-11-22T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.208383 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.208416 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.208425 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.208442 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.208452 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:04Z","lastTransitionTime":"2025-11-22T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.310277 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.310320 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.310330 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.310346 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.310357 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:04Z","lastTransitionTime":"2025-11-22T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.412775 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.412879 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.412904 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.412935 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.412956 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:04Z","lastTransitionTime":"2025-11-22T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.515615 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.515675 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.515692 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.515716 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.515732 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:04Z","lastTransitionTime":"2025-11-22T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.618097 4928 scope.go:117] "RemoveContainer" containerID="3f8cd6e128376256f4d00303ee3bc4018d1ce68512e274c822c4ff7c60cb0354" Nov 22 00:07:04 crc kubenswrapper[4928]: E1122 00:07:04.618294 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-nchmx_openshift-ovn-kubernetes(31460a9d-62d5-4dae-bb57-4db45544352d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.618985 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.619044 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.619057 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.619078 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.619094 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:04Z","lastTransitionTime":"2025-11-22T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.722400 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.722471 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.722493 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.722526 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.722546 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:04Z","lastTransitionTime":"2025-11-22T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.825718 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.825754 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.825762 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.825776 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.825787 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:04Z","lastTransitionTime":"2025-11-22T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.928537 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.928597 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.928607 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.928625 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:04 crc kubenswrapper[4928]: I1122 00:07:04.928634 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:04Z","lastTransitionTime":"2025-11-22T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.031858 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.031924 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.031935 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.031952 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.031961 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:05Z","lastTransitionTime":"2025-11-22T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.135191 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.135226 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.135235 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.135250 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.135259 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:05Z","lastTransitionTime":"2025-11-22T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.238517 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.238588 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.238611 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.238643 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.238669 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:05Z","lastTransitionTime":"2025-11-22T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.342054 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.342138 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.342159 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.342186 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.342204 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:05Z","lastTransitionTime":"2025-11-22T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.444865 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.444939 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.444970 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.445003 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.445026 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:05Z","lastTransitionTime":"2025-11-22T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.547933 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.547977 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.547988 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.548009 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.548022 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:05Z","lastTransitionTime":"2025-11-22T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.617537 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.617769 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.617769 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.617537 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:07:05 crc kubenswrapper[4928]: E1122 00:07:05.618422 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:07:05 crc kubenswrapper[4928]: E1122 00:07:05.617976 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:07:05 crc kubenswrapper[4928]: E1122 00:07:05.618400 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:07:05 crc kubenswrapper[4928]: E1122 00:07:05.618135 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.635876 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:05Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.651047 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.651116 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.651130 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.651174 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.651188 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:05Z","lastTransitionTime":"2025-11-22T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.655094 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:05Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.672471 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d55ec606decfa92e80435c75e4334f3af401ab6e2a3a85f755bc5ab60fbf97d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bdf9
d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:05Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.686701 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e892ca5e0665be6285dce09c13baff951eb1b769532c1b139517e1c0e91b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef08c6da00a3a64de0c044927dfc4cc482bc16403a3b914e457410e85f127cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9g8jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-22T00:07:05Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.701205 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:05Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.719921 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:05Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.747853 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:05Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.752933 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.752978 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.752994 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.753017 4928 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.753032 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:05Z","lastTransitionTime":"2025-11-22T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.764029 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:05Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.784733 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00
:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:05Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.796413 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378c39793deccb986522ae643c2da3b423aa1711bc63648c3cbf01db65f95823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-
22T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:05Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.828816 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8cd6e128376256f4d00303ee3bc4018d1ce68512e274c822c4ff7c60cb0354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f8cd6e128376256f4d00303ee3bc4018d1ce68512e274c822c4ff7c60cb0354\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:06:51Z\\\",\\\"message\\\":\\\"w:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1122 00:06:51.519106 6572 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nchmx_openshift-ovn-kubernetes(31460a9d-62d5-4dae-bb57-4db45544352d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d37
6dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:05Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.840984 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kx6zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3aa30f0-2a7f-4b60-b254-3e894a581f89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kx6zl\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:05Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.852023 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"279d01b1-dfa9-4f88-9732-cc1ef2b7d91b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f7a67672f32118ccc6fd888ae51bd40d45fdc973168d3a31d45a4d7fdcb947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55dd3a4e73ec5645006a48d18605b1e3f1a96cada3c5895bfcc10dea861e698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2f7231dd5ee03fb1926ca0a3e57fa2d71013d1ddfd9aa465fa60ebac22b8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f33221b
c5927abb9d440a7f5e93ef8d011b16384b3274406cab3b93ab4c66e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f33221bc5927abb9d440a7f5e93ef8d011b16384b3274406cab3b93ab4c66e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:05Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.855471 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.855511 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.855523 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.855539 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 
22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.855550 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:05Z","lastTransitionTime":"2025-11-22T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.864880 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\
\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:05Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.882158 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b573
5ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:05Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.896172 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-22T00:07:05Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.910980 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:05Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.957495 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.957628 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.957638 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.957665 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:05 crc kubenswrapper[4928]: I1122 00:07:05.957674 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:05Z","lastTransitionTime":"2025-11-22T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.060237 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.060297 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.060313 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.060334 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.060347 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:06Z","lastTransitionTime":"2025-11-22T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.160264 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.160909 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.160929 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.160945 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.160955 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:06Z","lastTransitionTime":"2025-11-22T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:06 crc kubenswrapper[4928]: E1122 00:07:06.171371 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:06Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.174597 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.174626 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.174637 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.174651 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.174663 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:06Z","lastTransitionTime":"2025-11-22T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:06 crc kubenswrapper[4928]: E1122 00:07:06.186951 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:06Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.190743 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.190779 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.190811 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.190832 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.190843 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:06Z","lastTransitionTime":"2025-11-22T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:06 crc kubenswrapper[4928]: E1122 00:07:06.201608 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:06Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.204661 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.204767 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.204874 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.204959 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.205019 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:06Z","lastTransitionTime":"2025-11-22T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:06 crc kubenswrapper[4928]: E1122 00:07:06.216516 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:06Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.219663 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.219696 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.219706 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.219722 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.219732 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:06Z","lastTransitionTime":"2025-11-22T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:06 crc kubenswrapper[4928]: E1122 00:07:06.231965 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:06Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:06 crc kubenswrapper[4928]: E1122 00:07:06.232074 4928 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.233536 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.233575 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.233588 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.233609 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.233622 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:06Z","lastTransitionTime":"2025-11-22T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.336016 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.336061 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.336075 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.336094 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.336106 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:06Z","lastTransitionTime":"2025-11-22T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.439113 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.439154 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.439167 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.439185 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.439197 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:06Z","lastTransitionTime":"2025-11-22T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.541000 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.541043 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.541059 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.541075 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.541087 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:06Z","lastTransitionTime":"2025-11-22T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.643381 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.643578 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.643601 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.643627 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.643645 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:06Z","lastTransitionTime":"2025-11-22T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.746866 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.746952 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.746978 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.747007 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.747028 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:06Z","lastTransitionTime":"2025-11-22T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.850562 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.850630 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.850649 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.850679 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.850698 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:06Z","lastTransitionTime":"2025-11-22T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.954480 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.954547 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.954562 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.954587 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:06 crc kubenswrapper[4928]: I1122 00:07:06.954605 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:06Z","lastTransitionTime":"2025-11-22T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.058725 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.058821 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.058834 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.058855 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.058868 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:07Z","lastTransitionTime":"2025-11-22T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.161708 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.161740 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.161748 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.161762 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.161771 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:07Z","lastTransitionTime":"2025-11-22T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.264873 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.264967 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.264990 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.265022 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.265044 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:07Z","lastTransitionTime":"2025-11-22T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.368109 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.368161 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.368171 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.368229 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.368248 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:07Z","lastTransitionTime":"2025-11-22T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.471201 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.471260 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.471272 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.471289 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.471301 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:07Z","lastTransitionTime":"2025-11-22T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.574741 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.574826 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.574846 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.574877 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.574894 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:07Z","lastTransitionTime":"2025-11-22T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.617670 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.617747 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.617762 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:07:07 crc kubenswrapper[4928]: E1122 00:07:07.617844 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:07:07 crc kubenswrapper[4928]: E1122 00:07:07.617911 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.617925 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:07:07 crc kubenswrapper[4928]: E1122 00:07:07.617968 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:07:07 crc kubenswrapper[4928]: E1122 00:07:07.618176 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.677535 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.677601 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.677616 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.677640 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.677657 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:07Z","lastTransitionTime":"2025-11-22T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.780847 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.780925 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.780943 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.780972 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.780993 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:07Z","lastTransitionTime":"2025-11-22T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.884623 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.884669 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.884682 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.884702 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.884714 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:07Z","lastTransitionTime":"2025-11-22T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.987095 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.987132 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.987140 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.987156 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:07 crc kubenswrapper[4928]: I1122 00:07:07.987166 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:07Z","lastTransitionTime":"2025-11-22T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.090117 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.090153 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.090162 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.090176 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.090187 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:08Z","lastTransitionTime":"2025-11-22T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.193282 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.193316 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.193324 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.193337 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.193347 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:08Z","lastTransitionTime":"2025-11-22T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.297573 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.297630 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.297648 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.297675 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.297694 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:08Z","lastTransitionTime":"2025-11-22T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.400959 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.401000 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.401010 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.401026 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.401035 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:08Z","lastTransitionTime":"2025-11-22T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.503552 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.503608 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.503617 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.503647 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.503658 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:08Z","lastTransitionTime":"2025-11-22T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.606334 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.606367 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.606374 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.606395 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.606403 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:08Z","lastTransitionTime":"2025-11-22T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.709367 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.709414 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.709428 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.709449 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.709463 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:08Z","lastTransitionTime":"2025-11-22T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.812470 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.812514 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.812529 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.812556 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.812567 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:08Z","lastTransitionTime":"2025-11-22T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.915690 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.915757 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.915774 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.915830 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:08 crc kubenswrapper[4928]: I1122 00:07:08.915852 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:08Z","lastTransitionTime":"2025-11-22T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.019023 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.019077 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.019086 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.019105 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.019117 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:09Z","lastTransitionTime":"2025-11-22T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.122314 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.122427 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.122450 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.122483 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.122506 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:09Z","lastTransitionTime":"2025-11-22T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.225318 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.225369 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.225383 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.225402 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.225414 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:09Z","lastTransitionTime":"2025-11-22T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.328689 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.328744 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.328756 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.328779 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.328788 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:09Z","lastTransitionTime":"2025-11-22T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.437168 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.437235 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.437249 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.437265 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.437276 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:09Z","lastTransitionTime":"2025-11-22T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.539723 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.539777 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.539811 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.539832 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.539846 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:09Z","lastTransitionTime":"2025-11-22T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.617918 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.617986 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl"
Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.618087 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.618388 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 22 00:07:09 crc kubenswrapper[4928]: E1122 00:07:09.618380 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 22 00:07:09 crc kubenswrapper[4928]: E1122 00:07:09.618557 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 22 00:07:09 crc kubenswrapper[4928]: E1122 00:07:09.618760 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89"
Nov 22 00:07:09 crc kubenswrapper[4928]: E1122 00:07:09.618851 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.643303 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.643335 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.643348 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.643365 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.643379 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:09Z","lastTransitionTime":"2025-11-22T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.746109 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.746179 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.746193 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.746212 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.746226 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:09Z","lastTransitionTime":"2025-11-22T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.849192 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.849244 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.849259 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.849277 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.849288 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:09Z","lastTransitionTime":"2025-11-22T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.951770 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.951842 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.951851 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.951870 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 00:07:09 crc kubenswrapper[4928]: I1122 00:07:09.951881 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:09Z","lastTransitionTime":"2025-11-22T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.054304 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.054351 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.054404 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.054420 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.054429 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:10Z","lastTransitionTime":"2025-11-22T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.157868 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.157921 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.157933 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.157956 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.157969 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:10Z","lastTransitionTime":"2025-11-22T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.260670 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.260736 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.260748 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.260764 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.260775 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:10Z","lastTransitionTime":"2025-11-22T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.366009 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.366100 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.366124 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.366160 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.366189 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:10Z","lastTransitionTime":"2025-11-22T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.469277 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.469340 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.469351 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.469370 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.469381 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:10Z","lastTransitionTime":"2025-11-22T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.572544 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.572616 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.572635 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.572667 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.572685 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:10Z","lastTransitionTime":"2025-11-22T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.677842 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.677915 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.677967 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.678005 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.678032 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:10Z","lastTransitionTime":"2025-11-22T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.780448 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.780508 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.780529 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.780550 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.780562 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:10Z","lastTransitionTime":"2025-11-22T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.883574 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.883611 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.883638 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.883655 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.883666 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:10Z","lastTransitionTime":"2025-11-22T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.986163 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.986192 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.986201 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.986216 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 00:07:10 crc kubenswrapper[4928]: I1122 00:07:10.986224 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:10Z","lastTransitionTime":"2025-11-22T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.090024 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.090103 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.090120 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.090149 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.090171 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:11Z","lastTransitionTime":"2025-11-22T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.192767 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.192846 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.192857 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.192877 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.192889 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:11Z","lastTransitionTime":"2025-11-22T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.295763 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.295824 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.295836 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.295857 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.295871 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:11Z","lastTransitionTime":"2025-11-22T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.398951 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.398992 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.399004 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.399021 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.399032 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:11Z","lastTransitionTime":"2025-11-22T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.501809 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.502072 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.502172 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.502269 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.502374 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:11Z","lastTransitionTime":"2025-11-22T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.605073 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.605110 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.605121 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.605137 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.605156 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:11Z","lastTransitionTime":"2025-11-22T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.617616 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.617644 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl"
Nov 22 00:07:11 crc kubenswrapper[4928]: E1122 00:07:11.617763 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.617944 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.617947 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 22 00:07:11 crc kubenswrapper[4928]: E1122 00:07:11.618058 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89"
Nov 22 00:07:11 crc kubenswrapper[4928]: E1122 00:07:11.618355 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 22 00:07:11 crc kubenswrapper[4928]: E1122 00:07:11.618451 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.707593 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.707636 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.707654 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.707678 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.707694 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:11Z","lastTransitionTime":"2025-11-22T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.810007 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.810069 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.810085 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.810112 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.810128 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:11Z","lastTransitionTime":"2025-11-22T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.912927 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.912981 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.912993 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.913012 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 00:07:11 crc kubenswrapper[4928]: I1122 00:07:11.913024 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:11Z","lastTransitionTime":"2025-11-22T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.015933 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.015988 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.015999 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.016019 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.016030 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:12Z","lastTransitionTime":"2025-11-22T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.118623 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.118677 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.118691 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.118710 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.118721 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:12Z","lastTransitionTime":"2025-11-22T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.221547 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.221597 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.221610 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.221629 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.221642 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:12Z","lastTransitionTime":"2025-11-22T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.285888 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3aa30f0-2a7f-4b60-b254-3e894a581f89-metrics-certs\") pod \"network-metrics-daemon-kx6zl\" (UID: \"b3aa30f0-2a7f-4b60-b254-3e894a581f89\") " pod="openshift-multus/network-metrics-daemon-kx6zl"
Nov 22 00:07:12 crc kubenswrapper[4928]: E1122 00:07:12.286102 4928 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 22 00:07:12 crc kubenswrapper[4928]: E1122 00:07:12.286198 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3aa30f0-2a7f-4b60-b254-3e894a581f89-metrics-certs podName:b3aa30f0-2a7f-4b60-b254-3e894a581f89 nodeName:}" failed. No retries permitted until 2025-11-22 00:07:44.286174125 +0000 UTC m=+99.574205316 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3aa30f0-2a7f-4b60-b254-3e894a581f89-metrics-certs") pod "network-metrics-daemon-kx6zl" (UID: "b3aa30f0-2a7f-4b60-b254-3e894a581f89") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.323504 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.323579 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.323602 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.323630 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.323683 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:12Z","lastTransitionTime":"2025-11-22T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.427010 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.427056 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.427065 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.427079 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.427087 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:12Z","lastTransitionTime":"2025-11-22T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.529524 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.529895 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.530059 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.530223 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.530396 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:12Z","lastTransitionTime":"2025-11-22T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.633370 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.633406 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.633415 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.633429 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.633438 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:12Z","lastTransitionTime":"2025-11-22T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.736136 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.736177 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.736189 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.736206 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.736219 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:12Z","lastTransitionTime":"2025-11-22T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.838401 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.838434 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.838443 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.838457 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.838467 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:12Z","lastTransitionTime":"2025-11-22T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.941342 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.941385 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.941395 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.941412 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:12 crc kubenswrapper[4928]: I1122 00:07:12.941422 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:12Z","lastTransitionTime":"2025-11-22T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.043906 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.043950 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.043961 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.043978 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.043990 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:13Z","lastTransitionTime":"2025-11-22T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.147284 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.147358 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.147375 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.147402 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.147419 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:13Z","lastTransitionTime":"2025-11-22T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.250846 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.250928 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.250951 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.250986 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.251016 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:13Z","lastTransitionTime":"2025-11-22T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.353697 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.354447 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.354583 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.354705 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.354846 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:13Z","lastTransitionTime":"2025-11-22T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.457280 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.457317 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.457327 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.457343 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.457356 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:13Z","lastTransitionTime":"2025-11-22T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.559735 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.559768 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.559777 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.559811 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.559828 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:13Z","lastTransitionTime":"2025-11-22T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.617774 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.617869 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.617869 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:07:13 crc kubenswrapper[4928]: E1122 00:07:13.618333 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:07:13 crc kubenswrapper[4928]: E1122 00:07:13.618400 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.617928 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:07:13 crc kubenswrapper[4928]: E1122 00:07:13.618478 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:07:13 crc kubenswrapper[4928]: E1122 00:07:13.618321 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.662189 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.662235 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.662247 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.662264 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.662277 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:13Z","lastTransitionTime":"2025-11-22T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.765038 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.765076 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.765084 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.765099 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.765109 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:13Z","lastTransitionTime":"2025-11-22T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.867199 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.867470 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.867537 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.867600 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.867655 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:13Z","lastTransitionTime":"2025-11-22T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.970004 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.970052 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.970064 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.970081 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:13 crc kubenswrapper[4928]: I1122 00:07:13.970093 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:13Z","lastTransitionTime":"2025-11-22T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.075173 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.075203 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.075212 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.075231 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.075240 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:14Z","lastTransitionTime":"2025-11-22T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.178281 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.178336 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.178348 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.178368 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.178379 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:14Z","lastTransitionTime":"2025-11-22T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.280769 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.280826 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.280835 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.280855 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.280865 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:14Z","lastTransitionTime":"2025-11-22T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.383445 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.383524 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.383543 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.383573 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.383594 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:14Z","lastTransitionTime":"2025-11-22T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.454514 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xmk2r_c474173e-56bf-467c-84e6-e473d89563c4/kube-multus/0.log" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.454615 4928 generic.go:334] "Generic (PLEG): container finished" podID="c474173e-56bf-467c-84e6-e473d89563c4" containerID="18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355" exitCode=1 Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.454680 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xmk2r" event={"ID":"c474173e-56bf-467c-84e6-e473d89563c4","Type":"ContainerDied","Data":"18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355"} Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.455419 4928 scope.go:117] "RemoveContainer" containerID="18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.488174 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.488512 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.488532 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.488556 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.488573 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:14Z","lastTransitionTime":"2025-11-22T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.489046 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8cd6e128376256f4d00303ee3bc4018d1ce68512e274c822c4ff7c60cb0354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f8cd6e128376256f4d00303ee3bc4018d1ce68512e274c822c4ff7c60cb0354\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:06:51Z\\\",\\\"message\\\":\\\"w:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} 
{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1122 00:06:51.519106 6572 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nchmx_openshift-ovn-kubernetes(31460a9d-62d5-4dae-bb57-4db45544352d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d37
6dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.503713 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kx6zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3aa30f0-2a7f-4b60-b254-3e894a581f89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kx6zl\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.520223 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"279d01b1-dfa9-4f88-9732-cc1ef2b7d91b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f7a67672f32118ccc6fd888ae51bd40d45fdc973168d3a31d45a4d7fdcb947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55dd3a4e73ec5645006a48d18605b1e3f1a96cada3c5895bfcc10dea861e698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2f7231dd5ee03fb1926ca0a3e57fa2d71013d1ddfd9aa465fa60ebac22b8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f33221b
c5927abb9d440a7f5e93ef8d011b16384b3274406cab3b93ab4c66e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f33221bc5927abb9d440a7f5e93ef8d011b16384b3274406cab3b93ab4c66e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.534189 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.549765 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b573
5ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.565498 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-22T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.581087 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.597777 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.597856 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.597868 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.597987 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.597999 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:14Z","lastTransitionTime":"2025-11-22T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.598985 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.612347 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.628198 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d55ec606decfa92e80435c75e4334f3af401ab6e2a3a85f755bc5ab60fbf97d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bdf9
d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.638966 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e892ca5e0665be6285dce09c13baff951eb1b769532c1b139517e1c0e91b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef08c6da00a3a64de0c044927dfc4cc482bc16403a3b914e457410e85f127cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9g8jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-22T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.648992 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.659630 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.670532 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.679052 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4
860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.688640 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:07:13Z\\\",\\\"message\\\":\\\"2025-11-22T00:06:28+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d4923ffc-26f4-4e28-b539-a1bc02f4ba27\\\\n2025-11-22T00:06:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d4923ffc-26f4-4e28-b539-a1bc02f4ba27 to /host/opt/cni/bin/\\\\n2025-11-22T00:06:28Z [verbose] multus-daemon started\\\\n2025-11-22T00:06:28Z [verbose] Readiness Indicator file check\\\\n2025-11-22T00:07:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.696343 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378c39793deccb986522ae643c2da3b423aa1711bc63648c3cbf01db65f95823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:14Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.700108 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.700216 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.700275 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.700349 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.700429 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:14Z","lastTransitionTime":"2025-11-22T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.803324 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.803385 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.803395 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.803430 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.803441 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:14Z","lastTransitionTime":"2025-11-22T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.906140 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.906314 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.906441 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.906564 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:14 crc kubenswrapper[4928]: I1122 00:07:14.906746 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:14Z","lastTransitionTime":"2025-11-22T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.010000 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.010236 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.010331 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.010401 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.010473 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:15Z","lastTransitionTime":"2025-11-22T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.113416 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.113505 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.113514 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.113528 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.113537 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:15Z","lastTransitionTime":"2025-11-22T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.215631 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.215692 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.215709 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.215736 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.215754 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:15Z","lastTransitionTime":"2025-11-22T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.319480 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.319524 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.319534 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.319550 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.319559 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:15Z","lastTransitionTime":"2025-11-22T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.421608 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.422374 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.422471 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.422567 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.422659 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:15Z","lastTransitionTime":"2025-11-22T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.459303 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xmk2r_c474173e-56bf-467c-84e6-e473d89563c4/kube-multus/0.log" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.459372 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xmk2r" event={"ID":"c474173e-56bf-467c-84e6-e473d89563c4","Type":"ContainerStarted","Data":"7e8a6e52face946c0a61439452b4d078ca11a0fac66d2e687e86442e4b4acf7f"} Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.477881 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.494336 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d55ec606decfa92e80435c75e4334f3af401ab6e2a3a85f755bc5ab60fbf97d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bdf9
d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.507231 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e892ca5e0665be6285dce09c13baff951eb1b769532c1b139517e1c0e91b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef08c6da00a3a64de0c044927dfc4cc482bc16403a3b914e457410e85f127cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9g8jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.518298 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.525093 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.525127 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.525136 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.525151 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.525162 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:15Z","lastTransitionTime":"2025-11-22T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.530821 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.551081 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.567820 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4
860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.590243 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e8a6e52face946c0a61439452b4d078ca11a0fac66d2e687e86442e4b4acf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:07:13Z\\\",\\\"message\\\":\\\"2025-11-22T00:06:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d4923ffc-26f4-4e28-b539-a1bc02f4ba27\\\\n2025-11-22T00:06:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d4923ffc-26f4-4e28-b539-a1bc02f4ba27 to /host/opt/cni/bin/\\\\n2025-11-22T00:06:28Z [verbose] multus-daemon started\\\\n2025-11-22T00:06:28Z [verbose] 
Readiness Indicator file check\\\\n2025-11-22T00:07:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.605004 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378c39793deccb986
522ae643c2da3b423aa1711bc63648c3cbf01db65f95823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.617834 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.617910 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:07:15 crc kubenswrapper[4928]: E1122 00:07:15.618009 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:07:15 crc kubenswrapper[4928]: E1122 00:07:15.618114 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.618268 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.618320 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:07:15 crc kubenswrapper[4928]: E1122 00:07:15.618406 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:07:15 crc kubenswrapper[4928]: E1122 00:07:15.618525 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.624380 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.627176 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.627237 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.627251 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.627269 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.627278 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:15Z","lastTransitionTime":"2025-11-22T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.631831 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.648972 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8cd6e128376256f4d00303ee3bc4018d1ce68512e274c822c4ff7c60cb0354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f8cd6e128376256f4d00303ee3bc4018d1ce68512e274c822c4ff7c60cb0354\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:06:51Z\\\",\\\"message\\\":\\\"w:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} 
{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1122 00:06:51.519106 6572 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nchmx_openshift-ovn-kubernetes(31460a9d-62d5-4dae-bb57-4db45544352d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d37
6dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.664425 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kx6zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3aa30f0-2a7f-4b60-b254-3e894a581f89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kx6zl\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.680865 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.702694 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b573
5ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.720697 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.729708 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.729758 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.729774 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.729816 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.729834 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:15Z","lastTransitionTime":"2025-11-22T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.735452 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.748219 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"279d01b1-dfa9-4f88-9732-cc1ef2b7d91b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f7a67672f32118ccc6fd888ae51bd40d45fdc973168d3a31d45a4d7fdcb947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55dd3a4e73ec5645006a48d18605b1e3f1a96cada3c5895bfcc10dea861e698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2f7231dd5ee03fb1926ca0a3e57fa2d71013d1ddfd9aa465fa60ebac22b8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f33221bc5927abb9d440a7f5e93ef8d011b16384b3274406cab3b93ab4c66e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0f33221bc5927abb9d440a7f5e93ef8d011b16384b3274406cab3b93ab4c66e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.762939 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e8a6e52face
946c0a61439452b4d078ca11a0fac66d2e687e86442e4b4acf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:07:13Z\\\",\\\"message\\\":\\\"2025-11-22T00:06:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d4923ffc-26f4-4e28-b539-a1bc02f4ba27\\\\n2025-11-22T00:06:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d4923ffc-26f4-4e28-b539-a1bc02f4ba27 to /host/opt/cni/bin/\\\\n2025-11-22T00:06:28Z [verbose] multus-daemon started\\\\n2025-11-22T00:06:28Z [verbose] Readiness Indicator file check\\\\n2025-11-22T00:07:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.774732 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378c39793deccb986522ae643c2da3b423aa1711bc63648c3cbf01db65f95823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.789718 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.805178 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.819433 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.832114 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.832153 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.832166 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.832183 4928 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.832193 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:15Z","lastTransitionTime":"2025-11-22T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.832955 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.856266 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8cd6e128376256f4d00303ee3bc4018d1ce68512e274c822c4ff7c60cb0354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f8cd6e128376256f4d00303ee3bc4018d1ce68512e274c822c4ff7c60cb0354\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:06:51Z\\\",\\\"message\\\":\\\"w:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} 
{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1122 00:06:51.519106 6572 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nchmx_openshift-ovn-kubernetes(31460a9d-62d5-4dae-bb57-4db45544352d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d37
6dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.867418 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kx6zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3aa30f0-2a7f-4b60-b254-3e894a581f89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kx6zl\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.884071 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.899875 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.915173 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"279d01b1-dfa9-4f88-9732-cc1ef2b7d91b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f7a67672f32118ccc6fd888ae51bd40d45fdc973168d3a31d45a4d7fdcb947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55dd3a4e73ec5645006a48d18605b1e3f1a96cada3c5895bfcc10dea861e698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2f7231dd5ee03fb1926ca0a3e57fa2d71013d1ddfd9aa465fa60ebac22b8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f33221bc5927abb9d440a7f5e93ef8d011b16384b3274406cab3b93ab4c66e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0f33221bc5927abb9d440a7f5e93ef8d011b16384b3274406cab3b93ab4c66e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.929204 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.935472 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.935519 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.935534 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.935557 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.935577 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:15Z","lastTransitionTime":"2025-11-22T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.939782 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f15f601-986f-408b-836c-1c61ecf0fbef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9328d62b6a51b26907125b6fc9fcf05b557e7e9668726cc0814e83cda4220d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c69dae3205a1f382eaf3d731198ceb9cf951e6222d15d2385366fd039f4aca84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69dae3205a1f382eaf3d731198ceb9cf951e6222d15d2385366fd039f4aca84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.957105 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b573
5ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.970032 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:15 crc kubenswrapper[4928]: I1122 00:07:15.986002 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:15Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.013300 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d55ec606decfa92e80435c75e4334f3af401ab6e2a3a85f755bc5ab60fbf97d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bdf9
d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:16Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.028334 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e892ca5e0665be6285dce09c13baff951eb1b769532c1b139517e1c0e91b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef08c6da00a3a64de0c044927dfc4cc482bc16403a3b914e457410e85f127cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9g8jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-22T00:07:16Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.038229 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.038278 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.038292 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.038311 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.038323 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:16Z","lastTransitionTime":"2025-11-22T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.141311 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.141345 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.141354 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.141369 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.141382 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:16Z","lastTransitionTime":"2025-11-22T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.244619 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.244671 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.244684 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.244702 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.244715 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:16Z","lastTransitionTime":"2025-11-22T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.322117 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.322408 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.322542 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.322719 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.322894 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:16Z","lastTransitionTime":"2025-11-22T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:16 crc kubenswrapper[4928]: E1122 00:07:16.338361 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:16Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.343217 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.343238 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.343246 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.343258 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.343266 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:16Z","lastTransitionTime":"2025-11-22T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:16 crc kubenswrapper[4928]: E1122 00:07:16.360494 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:16Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.364041 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.364089 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.364124 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.364136 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.364145 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:16Z","lastTransitionTime":"2025-11-22T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:16 crc kubenswrapper[4928]: E1122 00:07:16.380619 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:16Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.384475 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.384531 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.384543 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.384558 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.384570 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:16Z","lastTransitionTime":"2025-11-22T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:16 crc kubenswrapper[4928]: E1122 00:07:16.395912 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:16Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.399541 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.399581 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.399597 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.399612 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.399620 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:16Z","lastTransitionTime":"2025-11-22T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:16 crc kubenswrapper[4928]: E1122 00:07:16.411015 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:16Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:16 crc kubenswrapper[4928]: E1122 00:07:16.411148 4928 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.412453 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.412487 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.412497 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.412516 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.412527 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:16Z","lastTransitionTime":"2025-11-22T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.515520 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.515553 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.515561 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.515577 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.515586 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:16Z","lastTransitionTime":"2025-11-22T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.618019 4928 scope.go:117] "RemoveContainer" containerID="3f8cd6e128376256f4d00303ee3bc4018d1ce68512e274c822c4ff7c60cb0354" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.618247 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.618298 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.618308 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.618322 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.618330 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:16Z","lastTransitionTime":"2025-11-22T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.721487 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.721529 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.721547 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.721573 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.721594 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:16Z","lastTransitionTime":"2025-11-22T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.824654 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.824696 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.824708 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.824730 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.824741 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:16Z","lastTransitionTime":"2025-11-22T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.927438 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.927484 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.927496 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.927519 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:16 crc kubenswrapper[4928]: I1122 00:07:16.927533 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:16Z","lastTransitionTime":"2025-11-22T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.030079 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.030129 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.030139 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.030156 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.030175 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:17Z","lastTransitionTime":"2025-11-22T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.132513 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.132548 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.132557 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.132571 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.132582 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:17Z","lastTransitionTime":"2025-11-22T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.236560 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.236591 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.236600 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.236616 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.236625 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:17Z","lastTransitionTime":"2025-11-22T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.338630 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.338663 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.338671 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.338687 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.338699 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:17Z","lastTransitionTime":"2025-11-22T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.441137 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.441185 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.441198 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.441219 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.441233 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:17Z","lastTransitionTime":"2025-11-22T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.468297 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nchmx_31460a9d-62d5-4dae-bb57-4db45544352d/ovnkube-controller/2.log" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.471085 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" event={"ID":"31460a9d-62d5-4dae-bb57-4db45544352d","Type":"ContainerStarted","Data":"de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10"} Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.471737 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.484819 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:17Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.497663 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:17Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.511199 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:17Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.524546 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4
860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:17Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.539328 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e8a6e52face946c0a61439452b4d078ca11a0fac66d2e687e86442e4b4acf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:07:13Z\\\",\\\"message\\\":\\\"2025-11-22T00:06:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d4923ffc-26f4-4e28-b539-a1bc02f4ba27\\\\n2025-11-22T00:06:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d4923ffc-26f4-4e28-b539-a1bc02f4ba27 to /host/opt/cni/bin/\\\\n2025-11-22T00:06:28Z [verbose] multus-daemon started\\\\n2025-11-22T00:06:28Z [verbose] 
Readiness Indicator file check\\\\n2025-11-22T00:07:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:17Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.544324 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.544693 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.544709 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.544743 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.544756 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:17Z","lastTransitionTime":"2025-11-22T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.554706 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378c39793deccb986522ae643c2da3b423aa1711bc63648c3cbf01db65f95823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:17Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.574894 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f8cd6e128376256f4d00303ee3bc4018d1ce68512e274c822c4ff7c60cb0354\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:06:51Z\\\",\\\"message\\\":\\\"w:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1122 00:06:51.519106 6572 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:17Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.587672 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kx6zl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3aa30f0-2a7f-4b60-b254-3e894a581f89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kx6zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:17Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:17 crc 
kubenswrapper[4928]: I1122 00:07:17.599093 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"279d01b1-dfa9-4f88-9732-cc1ef2b7d91b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f7a67672f32118ccc6fd888ae51bd40d45fdc973168d3a31d45a4d7fdcb947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55dd3a4e73ec5645006a48d18605b1e3f1a96cada3c5895bfcc10dea861e698\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2f7231dd5ee03fb1926ca0a3e57fa2d71013d1ddfd9aa465fa60ebac22b8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f33221bc5927abb9d440a7f5e93ef8d011b16384b3274406cab3b93ab4c66e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f33221bc5927abb9d440a7f5e93ef8d011b16384b3274406cab3b93ab4c66e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:17Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.610829 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:17Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.618981 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:07:17 crc kubenswrapper[4928]: E1122 00:07:17.619146 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.619392 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:07:17 crc kubenswrapper[4928]: E1122 00:07:17.619447 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.619574 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:07:17 crc kubenswrapper[4928]: E1122 00:07:17.619629 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.619938 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:07:17 crc kubenswrapper[4928]: E1122 00:07:17.620090 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.621879 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f15f601-986f-408b-836c-1c61ecf0fbef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9328d62b6a51b26907125b6fc9fcf05b557e7e9668726cc0814e83cda4220d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c69dae3205a1f382eaf3d731198ceb9cf951e6222d15d2385366fd039f4aca84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69dae3205a1f382eaf3d731198ceb9cf951e6222d15d2385366fd039f4aca84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:17Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.635165 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b573
5ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:17Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.647188 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-22T00:07:17Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.648904 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.648958 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.648973 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.648992 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.649004 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:17Z","lastTransitionTime":"2025-11-22T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.659912 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:17Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.672176 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:17Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.685934 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:17Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.701724 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d55ec606decfa92e80435c75e4334f3af401ab6e2a3a85f755bc5ab60fbf97d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bdf9
d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:17Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.716099 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e892ca5e0665be6285dce09c13baff951eb1b769532c1b139517e1c0e91b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef08c6da00a3a64de0c044927dfc4cc482bc16403a3b914e457410e85f127cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9g8jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-22T00:07:17Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.752293 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.752339 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.752349 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.752370 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.752383 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:17Z","lastTransitionTime":"2025-11-22T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.855350 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.855395 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.855405 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.855424 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.855438 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:17Z","lastTransitionTime":"2025-11-22T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.957571 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.957616 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.957626 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.957642 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:17 crc kubenswrapper[4928]: I1122 00:07:17.957651 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:17Z","lastTransitionTime":"2025-11-22T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.059927 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.059972 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.059983 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.059998 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.060009 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:18Z","lastTransitionTime":"2025-11-22T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.162109 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.162143 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.162152 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.162169 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.162178 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:18Z","lastTransitionTime":"2025-11-22T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.264929 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.264967 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.264977 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.264991 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.265001 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:18Z","lastTransitionTime":"2025-11-22T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.369001 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.369895 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.369929 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.369951 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.369966 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:18Z","lastTransitionTime":"2025-11-22T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.472291 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.472325 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.472336 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.472353 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.472364 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:18Z","lastTransitionTime":"2025-11-22T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.475727 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nchmx_31460a9d-62d5-4dae-bb57-4db45544352d/ovnkube-controller/3.log" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.476526 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nchmx_31460a9d-62d5-4dae-bb57-4db45544352d/ovnkube-controller/2.log" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.479241 4928 generic.go:334] "Generic (PLEG): container finished" podID="31460a9d-62d5-4dae-bb57-4db45544352d" containerID="de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10" exitCode=1 Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.479295 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" event={"ID":"31460a9d-62d5-4dae-bb57-4db45544352d","Type":"ContainerDied","Data":"de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10"} Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.479343 4928 scope.go:117] "RemoveContainer" containerID="3f8cd6e128376256f4d00303ee3bc4018d1ce68512e274c822c4ff7c60cb0354" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.480031 4928 scope.go:117] "RemoveContainer" containerID="de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10" Nov 22 00:07:18 crc kubenswrapper[4928]: E1122 00:07:18.480194 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nchmx_openshift-ovn-kubernetes(31460a9d-62d5-4dae-bb57-4db45544352d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.493782 4928 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e8a6e52face946c0a61439452b4d078ca11a0fac66d2e687e86442e4b4acf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:07:13Z\\\",\\\"message\\\":\\\"2025-11-22T00:06:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d4923ffc-26f4-4e28-b539-a1bc02f4ba27\\\\n2025-11-22T00:06:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d4923ffc-26f4-4e28-b539-a1bc02f4ba27 to 
/host/opt/cni/bin/\\\\n2025-11-22T00:06:28Z [verbose] multus-daemon started\\\\n2025-11-22T00:06:28Z [verbose] Readiness Indicator file check\\\\n2025-11-22T00:07:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-cert
s\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.505745 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378c39793deccb986522ae643c2da3b423aa1711bc63648c3cbf01db65f95823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.519820 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.556363 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.574765 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.574831 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.574847 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.574867 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.574878 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:18Z","lastTransitionTime":"2025-11-22T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.589743 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.601344 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4
860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.619450 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f8cd6e128376256f4d00303ee3bc4018d1ce68512e274c822c4ff7c60cb0354\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:06:51Z\\\",\\\"message\\\":\\\"w:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} 
{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1122 00:06:51.519106 6572 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:07:17Z\\\",\\\"message\\\":\\\"for network=default: []services.lbConfig(nil)\\\\nI1122 00:07:17.445156 6948 services_controller.go:445] Built service openshift-kube-controller-manager-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF1122 00:07:17.445166 6948 ovnkube.go:137] failed to run ovnkube: 
[failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:17Z is after 2025-08-24T17:21:41Z]\\\\nI1122 00:07:17.445169 6948 services_controller.go:451] Built service openshift-kube-controller-manager-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", 
Protoc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b
4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.633050 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kx6zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3aa30f0-2a7f-4b60-b254-3e894a581f89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kx6zl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.646303 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.657888 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.670137 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"279d01b1-dfa9-4f88-9732-cc1ef2b7d91b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f7a67672f32118ccc6fd888ae51bd40d45fdc973168d3a31d45a4d7fdcb947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55dd3a4e73ec5645006a48d18605b1e3f1a96cada3c5895bfcc10dea861e698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2f7231dd5ee03fb1926ca0a3e57fa2d71013d1ddfd9aa465fa60ebac22b8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f33221bc5927abb9d440a7f5e93ef8d011b16384b3274406cab3b93ab4c66e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0f33221bc5927abb9d440a7f5e93ef8d011b16384b3274406cab3b93ab4c66e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.677075 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.677253 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.677352 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.677454 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.677532 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:18Z","lastTransitionTime":"2025-11-22T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.682675 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b2
8964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.694558 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f15f601-986f-408b-836c-1c61ecf0fbef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9328d62b6a51b26907125b6fc9fcf05b557e7e9668726cc0814e83cda4220d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c69dae3205a1f382eaf3d731198ceb9cf951e6222d15d2385366fd039f4aca84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69dae3205a1f382eaf3d731198ceb9cf951e6222d15d2385366fd039f4aca84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.707487 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b573
5ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.716754 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.727715 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.743901 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d55ec606decfa92e80435c75e4334f3af401ab6e2a3a85f755bc5ab60fbf97d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bdf9
d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.756593 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e892ca5e0665be6285dce09c13baff951eb1b769532c1b139517e1c0e91b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef08c6da00a3a64de0c044927dfc4cc482bc16403a3b914e457410e85f127cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9g8jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-22T00:07:18Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.781779 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.781866 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.781877 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.781896 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.781990 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:18Z","lastTransitionTime":"2025-11-22T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.886749 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.886823 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.886833 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.886851 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.886863 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:18Z","lastTransitionTime":"2025-11-22T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.988670 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.988915 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.988988 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.989083 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:18 crc kubenswrapper[4928]: I1122 00:07:18.989155 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:18Z","lastTransitionTime":"2025-11-22T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.092373 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.092711 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.092866 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.092961 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.093054 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:19Z","lastTransitionTime":"2025-11-22T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.195778 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.195872 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.195892 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.195917 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.195934 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:19Z","lastTransitionTime":"2025-11-22T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.299106 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.299377 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.299468 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.299575 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.299660 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:19Z","lastTransitionTime":"2025-11-22T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.402326 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.402656 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.402752 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.402875 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.402965 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:19Z","lastTransitionTime":"2025-11-22T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.484260 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nchmx_31460a9d-62d5-4dae-bb57-4db45544352d/ovnkube-controller/3.log" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.487518 4928 scope.go:117] "RemoveContainer" containerID="de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10" Nov 22 00:07:19 crc kubenswrapper[4928]: E1122 00:07:19.487644 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nchmx_openshift-ovn-kubernetes(31460a9d-62d5-4dae-bb57-4db45544352d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.506164 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d55ec606decfa92e80435c75e4334f3af401ab6e2a3a85f755bc5ab60fbf97d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bdf9
d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.506715 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.506744 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.506755 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.506772 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.506783 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:19Z","lastTransitionTime":"2025-11-22T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.523649 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e892ca5e0665be6285dce09c13baff951eb1b769532c1b139517e1c0e91b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef08c6da00a3a64de0c044927dfc4cc482bc16403a3b914e457410e85f127cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9g8jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.535777 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.551116 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.563583 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.576089 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4
860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.588856 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e8a6e52face946c0a61439452b4d078ca11a0fac66d2e687e86442e4b4acf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:07:13Z\\\",\\\"message\\\":\\\"2025-11-22T00:06:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d4923ffc-26f4-4e28-b539-a1bc02f4ba27\\\\n2025-11-22T00:06:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d4923ffc-26f4-4e28-b539-a1bc02f4ba27 to /host/opt/cni/bin/\\\\n2025-11-22T00:06:28Z [verbose] multus-daemon started\\\\n2025-11-22T00:06:28Z [verbose] 
Readiness Indicator file check\\\\n2025-11-22T00:07:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.600943 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378c39793deccb986
522ae643c2da3b423aa1711bc63648c3cbf01db65f95823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.613094 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.613148 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.613162 4928 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.613179 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.613193 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:19Z","lastTransitionTime":"2025-11-22T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.614869 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.618138 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.618169 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.618366 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:07:19 crc kubenswrapper[4928]: E1122 00:07:19.618589 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.618727 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:07:19 crc kubenswrapper[4928]: E1122 00:07:19.618907 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:07:19 crc kubenswrapper[4928]: E1122 00:07:19.619208 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:07:19 crc kubenswrapper[4928]: E1122 00:07:19.619463 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.632713 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.643268 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kx6zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3aa30f0-2a7f-4b60-b254-3e894a581f89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kx6zl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.664979 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:07:17Z\\\",\\\"message\\\":\\\"for network=default: []services.lbConfig(nil)\\\\nI1122 00:07:17.445156 6948 services_controller.go:445] Built service openshift-kube-controller-manager-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF1122 00:07:17.445166 6948 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed 
to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:17Z is after 2025-08-24T17:21:41Z]\\\\nI1122 00:07:17.445169 6948 services_controller.go:451] Built service openshift-kube-controller-manager-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protoc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:07:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nchmx_openshift-ovn-kubernetes(31460a9d-62d5-4dae-bb57-4db45544352d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d37
6dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.676766 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f15f601-986f-408b-836c-1c61ecf0fbef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9328d62b6a51b26907125b6fc9fcf05b557e7e9668726cc0814e83cda4220d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c69dae3205a1f382eaf3d731198ceb9cf951e6222d15d2385366fd039f4aca84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69dae3205a1f382eaf3d731198ceb9cf951e6222d15d2385366fd039f4aca84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.690339 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b573
5ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.703729 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-22T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.715673 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.715744 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.715758 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.715776 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.715831 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:19Z","lastTransitionTime":"2025-11-22T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.717089 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.730678 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"279d01b1-dfa9-4f88-9732-cc1ef2b7d91b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f7a67672f32118ccc6fd888ae51bd40d45fdc973168d3a31d45a4d7fdcb947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55dd3a4e73ec5645006a48d18605b1e3f1a96cada3c5895bfcc10dea861e698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2f7231dd5ee03fb1926ca0a3e57fa2d71013d1ddfd9aa465fa60ebac22b8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f33221bc5927abb9d440a7f5e93ef8d011b16384b3274406cab3b93ab4c66e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0f33221bc5927abb9d440a7f5e93ef8d011b16384b3274406cab3b93ab4c66e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.743343 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:19Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.818676 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.818976 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.819071 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.819164 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.819265 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:19Z","lastTransitionTime":"2025-11-22T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.921198 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.921229 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.921238 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.921265 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:19 crc kubenswrapper[4928]: I1122 00:07:19.921274 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:19Z","lastTransitionTime":"2025-11-22T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.023733 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.024039 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.024190 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.024320 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.024449 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:20Z","lastTransitionTime":"2025-11-22T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.127644 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.127693 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.127705 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.127722 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.127734 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:20Z","lastTransitionTime":"2025-11-22T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.230409 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.230449 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.230460 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.230475 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.230485 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:20Z","lastTransitionTime":"2025-11-22T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.333100 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.333175 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.333186 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.333202 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.333214 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:20Z","lastTransitionTime":"2025-11-22T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.434788 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.434873 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.434979 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.435005 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.435018 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:20Z","lastTransitionTime":"2025-11-22T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.537275 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.537327 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.537342 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.537365 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.537382 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:20Z","lastTransitionTime":"2025-11-22T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.640134 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.640165 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.640173 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.640184 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.640194 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:20Z","lastTransitionTime":"2025-11-22T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.743055 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.743091 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.743101 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.743117 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.743129 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:20Z","lastTransitionTime":"2025-11-22T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.845595 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.845642 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.845651 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.845664 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.845674 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:20Z","lastTransitionTime":"2025-11-22T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.948166 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.948226 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.948237 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.948251 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:20 crc kubenswrapper[4928]: I1122 00:07:20.948264 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:20Z","lastTransitionTime":"2025-11-22T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.051053 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.051127 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.051145 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.051214 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.051231 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:21Z","lastTransitionTime":"2025-11-22T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.154135 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.154196 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.154210 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.154230 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.154245 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:21Z","lastTransitionTime":"2025-11-22T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.256894 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.256937 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.256949 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.256990 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.257005 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:21Z","lastTransitionTime":"2025-11-22T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.359178 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.359223 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.359234 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.359249 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.359259 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:21Z","lastTransitionTime":"2025-11-22T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.461229 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.461261 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.461273 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.461286 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.461295 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:21Z","lastTransitionTime":"2025-11-22T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.564065 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.564102 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.564111 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.564124 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.564132 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:21Z","lastTransitionTime":"2025-11-22T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.618075 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.618121 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.618153 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:07:21 crc kubenswrapper[4928]: E1122 00:07:21.618209 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.618462 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:07:21 crc kubenswrapper[4928]: E1122 00:07:21.618666 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:07:21 crc kubenswrapper[4928]: E1122 00:07:21.618562 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:07:21 crc kubenswrapper[4928]: E1122 00:07:21.618457 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.666290 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.666342 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.666357 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.666374 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.666388 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:21Z","lastTransitionTime":"2025-11-22T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.769014 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.769082 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.769105 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.769136 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.769160 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:21Z","lastTransitionTime":"2025-11-22T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.871513 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.871584 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.871602 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.871626 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.871638 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:21Z","lastTransitionTime":"2025-11-22T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.975023 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.975303 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.975403 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.975500 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:21 crc kubenswrapper[4928]: I1122 00:07:21.975595 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:21Z","lastTransitionTime":"2025-11-22T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.078967 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.079024 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.079055 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.079344 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.079362 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:22Z","lastTransitionTime":"2025-11-22T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.181616 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.181695 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.181712 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.181738 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.181755 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:22Z","lastTransitionTime":"2025-11-22T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.284547 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.284612 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.284638 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.284671 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.284735 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:22Z","lastTransitionTime":"2025-11-22T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.387518 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.387592 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.387616 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.387643 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.387661 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:22Z","lastTransitionTime":"2025-11-22T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.490361 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.490432 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.490458 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.490490 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.490513 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:22Z","lastTransitionTime":"2025-11-22T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.593895 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.593939 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.594013 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.594039 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.594057 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:22Z","lastTransitionTime":"2025-11-22T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.697606 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.697649 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.697662 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.697679 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.697691 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:22Z","lastTransitionTime":"2025-11-22T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.800235 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.800278 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.800291 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.800309 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.800323 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:22Z","lastTransitionTime":"2025-11-22T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.903007 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.903034 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.903042 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.903055 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:22 crc kubenswrapper[4928]: I1122 00:07:22.903063 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:22Z","lastTransitionTime":"2025-11-22T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.005136 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.005175 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.005184 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.005197 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.005205 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:23Z","lastTransitionTime":"2025-11-22T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.108562 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.108623 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.108641 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.108667 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.108684 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:23Z","lastTransitionTime":"2025-11-22T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.211573 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.211620 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.211631 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.211648 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.211661 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:23Z","lastTransitionTime":"2025-11-22T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.314686 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.314741 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.314757 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.314780 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.314825 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:23Z","lastTransitionTime":"2025-11-22T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.418173 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.418238 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.418256 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.418285 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.418304 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:23Z","lastTransitionTime":"2025-11-22T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.521138 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.521201 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.521217 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.521241 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.521258 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:23Z","lastTransitionTime":"2025-11-22T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.618406 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.618406 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.618519 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.618760 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:07:23 crc kubenswrapper[4928]: E1122 00:07:23.618917 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:07:23 crc kubenswrapper[4928]: E1122 00:07:23.618747 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:07:23 crc kubenswrapper[4928]: E1122 00:07:23.619056 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:07:23 crc kubenswrapper[4928]: E1122 00:07:23.619095 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.623426 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.623482 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.623502 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.623527 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.623546 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:23Z","lastTransitionTime":"2025-11-22T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.727414 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.727465 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.727477 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.727493 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.727503 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:23Z","lastTransitionTime":"2025-11-22T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.830151 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.830186 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.830195 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.830209 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.830218 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:23Z","lastTransitionTime":"2025-11-22T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.933238 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.933298 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.933314 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.933337 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:23 crc kubenswrapper[4928]: I1122 00:07:23.933360 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:23Z","lastTransitionTime":"2025-11-22T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.064602 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.064650 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.064659 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.064672 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.064681 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:24Z","lastTransitionTime":"2025-11-22T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.167889 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.167939 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.167953 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.167971 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.167984 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:24Z","lastTransitionTime":"2025-11-22T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.270471 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.270529 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.270546 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.270570 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.270586 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:24Z","lastTransitionTime":"2025-11-22T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.373230 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.373278 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.373287 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.373301 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.373311 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:24Z","lastTransitionTime":"2025-11-22T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.476198 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.476238 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.476247 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.476278 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.476291 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:24Z","lastTransitionTime":"2025-11-22T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.578691 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.578736 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.578749 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.578765 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.578777 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:24Z","lastTransitionTime":"2025-11-22T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.681189 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.681241 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.681258 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.681281 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.681297 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:24Z","lastTransitionTime":"2025-11-22T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.783415 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.783477 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.783493 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.783518 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.783535 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:24Z","lastTransitionTime":"2025-11-22T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.885774 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.885903 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.885933 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.885964 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.885986 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:24Z","lastTransitionTime":"2025-11-22T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.988965 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.989032 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.989052 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.989075 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:24 crc kubenswrapper[4928]: I1122 00:07:24.989092 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:24Z","lastTransitionTime":"2025-11-22T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.092345 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.092406 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.092423 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.092448 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.092463 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:25Z","lastTransitionTime":"2025-11-22T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.195360 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.195400 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.195412 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.195429 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.195442 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:25Z","lastTransitionTime":"2025-11-22T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.298889 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.298927 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.298938 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.298955 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.298967 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:25Z","lastTransitionTime":"2025-11-22T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.401725 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.401770 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.401783 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.401816 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.401830 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:25Z","lastTransitionTime":"2025-11-22T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.503698 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.503763 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.503779 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.503832 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.503850 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:25Z","lastTransitionTime":"2025-11-22T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.606957 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.607007 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.607019 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.607037 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.607051 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:25Z","lastTransitionTime":"2025-11-22T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.617506 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.617583 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:07:25 crc kubenswrapper[4928]: E1122 00:07:25.617629 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:07:25 crc kubenswrapper[4928]: E1122 00:07:25.617761 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.617844 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:07:25 crc kubenswrapper[4928]: E1122 00:07:25.617903 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.618176 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:07:25 crc kubenswrapper[4928]: E1122 00:07:25.618272 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.633637 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.648633 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.660807 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.674291 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4
860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.692726 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e8a6e52face946c0a61439452b4d078ca11a0fac66d2e687e86442e4b4acf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:07:13Z\\\",\\\"message\\\":\\\"2025-11-22T00:06:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d4923ffc-26f4-4e28-b539-a1bc02f4ba27\\\\n2025-11-22T00:06:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d4923ffc-26f4-4e28-b539-a1bc02f4ba27 to /host/opt/cni/bin/\\\\n2025-11-22T00:06:28Z [verbose] multus-daemon started\\\\n2025-11-22T00:06:28Z [verbose] 
Readiness Indicator file check\\\\n2025-11-22T00:07:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.704542 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378c39793deccb986
522ae643c2da3b423aa1711bc63648c3cbf01db65f95823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.709150 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.709196 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.709213 4928 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.709233 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.709246 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:25Z","lastTransitionTime":"2025-11-22T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.734101 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:07:17Z\\\",\\\"message\\\":\\\"for network=default: []services.lbConfig(nil)\\\\nI1122 00:07:17.445156 6948 services_controller.go:445] Built service openshift-kube-controller-manager-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF1122 00:07:17.445166 6948 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed 
to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:17Z is after 2025-08-24T17:21:41Z]\\\\nI1122 00:07:17.445169 6948 services_controller.go:451] Built service openshift-kube-controller-manager-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protoc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:07:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nchmx_openshift-ovn-kubernetes(31460a9d-62d5-4dae-bb57-4db45544352d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d37
6dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.750724 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kx6zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3aa30f0-2a7f-4b60-b254-3e894a581f89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kx6zl\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.765879 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"279d01b1-dfa9-4f88-9732-cc1ef2b7d91b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f7a67672f32118ccc6fd888ae51bd40d45fdc973168d3a31d45a4d7fdcb947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55dd3a4e73ec5645006a48d18605b1e3f1a96cada3c5895bfcc10dea861e698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2f7231dd5ee03fb1926ca0a3e57fa2d71013d1ddfd9aa465fa60ebac22b8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f33221b
c5927abb9d440a7f5e93ef8d011b16384b3274406cab3b93ab4c66e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f33221bc5927abb9d440a7f5e93ef8d011b16384b3274406cab3b93ab4c66e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.782943 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.796270 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f15f601-986f-408b-836c-1c61ecf0fbef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9328d62b6a51b26907125b6fc9fcf05b557e7e9668726cc0814e83cda4220d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c69dae3205a1f382eaf3d731198ceb9cf951e6222d15d2385366fd039f4aca84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69dae3205a1f382eaf3d731198ceb9cf951e6222d15d2385366fd039f4aca84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.810874 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b573
5ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.812667 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.812710 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.812721 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.812737 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.812749 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:25Z","lastTransitionTime":"2025-11-22T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.822502 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.835149 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.845937 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.859820 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.875313 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d55ec606decfa92e80435c75e4334f3af401ab6e2a3a85f755bc5ab60fbf97d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bdf9
d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.887628 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e892ca5e0665be6285dce09c13baff951eb1b769532c1b139517e1c0e91b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef08c6da00a3a64de0c044927dfc4cc482bc16403a3b914e457410e85f127cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9g8jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-22T00:07:25Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.915657 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.915685 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.915693 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.915707 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:25 crc kubenswrapper[4928]: I1122 00:07:25.915716 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:25Z","lastTransitionTime":"2025-11-22T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.017713 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.017771 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.017842 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.017871 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.017890 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:26Z","lastTransitionTime":"2025-11-22T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.121209 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.121301 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.121327 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.121364 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.121387 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:26Z","lastTransitionTime":"2025-11-22T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.224279 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.224339 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.224355 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.224375 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.224389 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:26Z","lastTransitionTime":"2025-11-22T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.327585 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.327639 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.327651 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.327670 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.327686 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:26Z","lastTransitionTime":"2025-11-22T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.431265 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.431320 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.431337 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.431364 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.431382 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:26Z","lastTransitionTime":"2025-11-22T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.505297 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.505379 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.505399 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.505473 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.505495 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:26Z","lastTransitionTime":"2025-11-22T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:26 crc kubenswrapper[4928]: E1122 00:07:26.534300 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:26Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.540280 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.540344 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.540356 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.540377 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.540390 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:26Z","lastTransitionTime":"2025-11-22T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:26 crc kubenswrapper[4928]: E1122 00:07:26.564764 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:26Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.572742 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.572827 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.572843 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.572871 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.572893 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:26Z","lastTransitionTime":"2025-11-22T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:26 crc kubenswrapper[4928]: E1122 00:07:26.588714 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:26Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.592968 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.592998 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.593011 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.593028 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.593042 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:26Z","lastTransitionTime":"2025-11-22T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:26 crc kubenswrapper[4928]: E1122 00:07:26.609418 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:26Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.614749 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.614794 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.614802 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.614832 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.614844 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:26Z","lastTransitionTime":"2025-11-22T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:26 crc kubenswrapper[4928]: E1122 00:07:26.630754 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:26Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:26 crc kubenswrapper[4928]: E1122 00:07:26.630982 4928 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.632659 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.632699 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.632710 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.632730 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.632765 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:26Z","lastTransitionTime":"2025-11-22T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.736996 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.737044 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.737053 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.737068 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.737080 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:26Z","lastTransitionTime":"2025-11-22T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.840169 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.840217 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.840226 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.840242 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.840252 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:26Z","lastTransitionTime":"2025-11-22T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.944470 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.944534 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.944548 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.944570 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:26 crc kubenswrapper[4928]: I1122 00:07:26.944584 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:26Z","lastTransitionTime":"2025-11-22T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.047332 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.047446 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.047470 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.047495 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.047515 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:27Z","lastTransitionTime":"2025-11-22T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.150557 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.150603 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.150614 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.150633 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.150644 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:27Z","lastTransitionTime":"2025-11-22T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.253614 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.253650 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.253657 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.253672 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.253681 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:27Z","lastTransitionTime":"2025-11-22T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.356945 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.357056 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.357073 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.357104 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.357121 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:27Z","lastTransitionTime":"2025-11-22T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.460754 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.460871 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.460892 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.460923 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.460947 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:27Z","lastTransitionTime":"2025-11-22T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.563713 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.563774 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.563797 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.563851 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.563870 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:27Z","lastTransitionTime":"2025-11-22T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.618208 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.618275 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.618885 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:07:27 crc kubenswrapper[4928]: E1122 00:07:27.619353 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.620070 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:07:27 crc kubenswrapper[4928]: E1122 00:07:27.620435 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:07:27 crc kubenswrapper[4928]: E1122 00:07:27.621175 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:07:27 crc kubenswrapper[4928]: E1122 00:07:27.621587 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.667216 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.667354 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.667378 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.667406 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.667468 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:27Z","lastTransitionTime":"2025-11-22T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.771326 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.771437 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.771459 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.771526 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.771547 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:27Z","lastTransitionTime":"2025-11-22T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.874910 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.874951 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.874985 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.875005 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.875017 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:27Z","lastTransitionTime":"2025-11-22T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.978539 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.978640 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.978655 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.978673 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:27 crc kubenswrapper[4928]: I1122 00:07:27.978685 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:27Z","lastTransitionTime":"2025-11-22T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.082204 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.082261 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.082280 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.082305 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.082324 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:28Z","lastTransitionTime":"2025-11-22T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.185030 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.185083 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.185099 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.185121 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.185138 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:28Z","lastTransitionTime":"2025-11-22T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.288507 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.288560 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.288571 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.288592 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.288611 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:28Z","lastTransitionTime":"2025-11-22T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.392640 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.392689 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.392703 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.392722 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.392734 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:28Z","lastTransitionTime":"2025-11-22T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.495519 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.495568 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.495579 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.495597 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.495608 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:28Z","lastTransitionTime":"2025-11-22T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.598650 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.598693 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.598705 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.598722 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.598734 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:28Z","lastTransitionTime":"2025-11-22T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.700979 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.701036 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.701052 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.701073 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.701088 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:28Z","lastTransitionTime":"2025-11-22T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.803924 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.803991 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.804001 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.804018 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.804027 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:28Z","lastTransitionTime":"2025-11-22T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.906851 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.906904 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.906920 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.906936 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:28 crc kubenswrapper[4928]: I1122 00:07:28.906948 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:28Z","lastTransitionTime":"2025-11-22T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.009400 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.009445 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.009465 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.009487 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.009502 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:29Z","lastTransitionTime":"2025-11-22T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.111943 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.112015 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.112038 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.112067 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.112089 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:29Z","lastTransitionTime":"2025-11-22T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.215439 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.215529 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.215562 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.215600 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.215627 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:29Z","lastTransitionTime":"2025-11-22T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.318751 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.318888 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.318905 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.318939 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.318964 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:29Z","lastTransitionTime":"2025-11-22T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.335611 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.335946 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.336041 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:07:29 crc kubenswrapper[4928]: E1122 00:07:29.336252 4928 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 00:07:29 crc kubenswrapper[4928]: E1122 00:07:29.336347 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 00:08:33.336318943 +0000 UTC m=+148.624350174 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 00:07:29 crc kubenswrapper[4928]: E1122 00:07:29.336766 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:33.336742845 +0000 UTC m=+148.624774076 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:07:29 crc kubenswrapper[4928]: E1122 00:07:29.336961 4928 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 00:07:29 crc kubenswrapper[4928]: E1122 00:07:29.337205 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 00:08:33.337004782 +0000 UTC m=+148.625036013 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.422021 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.422149 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.422168 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.422193 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.422205 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:29Z","lastTransitionTime":"2025-11-22T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.437962 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:07:29 crc kubenswrapper[4928]: E1122 00:07:29.438252 4928 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 00:07:29 crc kubenswrapper[4928]: E1122 00:07:29.438303 4928 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 00:07:29 crc kubenswrapper[4928]: E1122 00:07:29.438331 4928 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:07:29 crc kubenswrapper[4928]: E1122 00:07:29.438486 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 00:08:33.438455267 +0000 UTC m=+148.726486498 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.526650 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.526708 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.526721 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.526743 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.526759 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:29Z","lastTransitionTime":"2025-11-22T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.539847 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:07:29 crc kubenswrapper[4928]: E1122 00:07:29.540121 4928 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 00:07:29 crc kubenswrapper[4928]: E1122 00:07:29.540218 4928 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 00:07:29 crc kubenswrapper[4928]: E1122 00:07:29.540243 4928 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:07:29 crc kubenswrapper[4928]: E1122 00:07:29.540344 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 00:08:33.540315204 +0000 UTC m=+148.828346425 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.618122 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.618227 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:07:29 crc kubenswrapper[4928]: E1122 00:07:29.618332 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:07:29 crc kubenswrapper[4928]: E1122 00:07:29.618446 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.618526 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:07:29 crc kubenswrapper[4928]: E1122 00:07:29.618646 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.618717 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:07:29 crc kubenswrapper[4928]: E1122 00:07:29.618876 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.629869 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.630351 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.630375 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.630408 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.630433 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:29Z","lastTransitionTime":"2025-11-22T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.733541 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.733587 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.733599 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.733615 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.733624 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:29Z","lastTransitionTime":"2025-11-22T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.837156 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.837197 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.837208 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.837245 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.837255 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:29Z","lastTransitionTime":"2025-11-22T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.940319 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.940355 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.940364 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.940378 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:29 crc kubenswrapper[4928]: I1122 00:07:29.940390 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:29Z","lastTransitionTime":"2025-11-22T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.048639 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.049060 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.049449 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.049510 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.049843 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:30Z","lastTransitionTime":"2025-11-22T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.152767 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.152828 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.152838 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.152854 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.152863 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:30Z","lastTransitionTime":"2025-11-22T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.255978 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.256054 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.256077 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.256103 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.256121 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:30Z","lastTransitionTime":"2025-11-22T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.360042 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.360122 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.360147 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.360181 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.360205 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:30Z","lastTransitionTime":"2025-11-22T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.464234 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.464316 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.464341 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.464372 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.464399 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:30Z","lastTransitionTime":"2025-11-22T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.568735 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.568834 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.568853 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.568880 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.568898 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:30Z","lastTransitionTime":"2025-11-22T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.619689 4928 scope.go:117] "RemoveContainer" containerID="de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10" Nov 22 00:07:30 crc kubenswrapper[4928]: E1122 00:07:30.620164 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nchmx_openshift-ovn-kubernetes(31460a9d-62d5-4dae-bb57-4db45544352d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.673169 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.673216 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.673230 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.673248 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.673263 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:30Z","lastTransitionTime":"2025-11-22T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.775748 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.775787 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.775853 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.775870 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.775881 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:30Z","lastTransitionTime":"2025-11-22T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.878243 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.878302 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.878316 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.878337 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.878350 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:30Z","lastTransitionTime":"2025-11-22T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.981705 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.981754 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.981770 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.981822 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:30 crc kubenswrapper[4928]: I1122 00:07:30.981840 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:30Z","lastTransitionTime":"2025-11-22T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.084497 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.084595 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.084611 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.084634 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.084651 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:31Z","lastTransitionTime":"2025-11-22T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.187247 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.187321 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.187345 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.187374 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.187398 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:31Z","lastTransitionTime":"2025-11-22T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.291214 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.291283 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.291305 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.291333 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.291354 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:31Z","lastTransitionTime":"2025-11-22T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.394759 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.394873 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.394892 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.394918 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.394937 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:31Z","lastTransitionTime":"2025-11-22T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.498480 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.498530 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.498543 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.498563 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.498578 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:31Z","lastTransitionTime":"2025-11-22T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.601618 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.601844 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.601888 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.601921 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.601946 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:31Z","lastTransitionTime":"2025-11-22T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.618051 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.618102 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.618122 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.618078 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:07:31 crc kubenswrapper[4928]: E1122 00:07:31.618223 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:07:31 crc kubenswrapper[4928]: E1122 00:07:31.618352 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:07:31 crc kubenswrapper[4928]: E1122 00:07:31.618443 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:07:31 crc kubenswrapper[4928]: E1122 00:07:31.618540 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.705918 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.705971 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.705986 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.706003 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.706014 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:31Z","lastTransitionTime":"2025-11-22T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.808659 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.808729 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.808743 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.808760 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.808772 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:31Z","lastTransitionTime":"2025-11-22T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.911017 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.911063 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.911076 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.911093 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:31 crc kubenswrapper[4928]: I1122 00:07:31.911106 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:31Z","lastTransitionTime":"2025-11-22T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.014987 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.015052 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.015074 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.015101 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.015125 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:32Z","lastTransitionTime":"2025-11-22T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.118094 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.118147 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.118163 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.118186 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.118203 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:32Z","lastTransitionTime":"2025-11-22T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.220925 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.220956 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.220964 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.220977 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.220986 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:32Z","lastTransitionTime":"2025-11-22T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.324001 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.324048 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.324058 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.324081 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.324096 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:32Z","lastTransitionTime":"2025-11-22T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.426579 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.426639 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.426656 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.426679 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.426696 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:32Z","lastTransitionTime":"2025-11-22T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.530094 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.530182 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.530211 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.530241 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.530267 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:32Z","lastTransitionTime":"2025-11-22T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.633078 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.633106 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.633114 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.633128 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.633139 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:32Z","lastTransitionTime":"2025-11-22T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.736493 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.736579 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.736602 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.736639 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.736663 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:32Z","lastTransitionTime":"2025-11-22T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.839579 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.839625 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.839635 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.839650 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.839660 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:32Z","lastTransitionTime":"2025-11-22T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.942077 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.942137 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.942150 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.942166 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:32 crc kubenswrapper[4928]: I1122 00:07:32.942179 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:32Z","lastTransitionTime":"2025-11-22T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.044381 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.044426 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.044438 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.044456 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.044467 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:33Z","lastTransitionTime":"2025-11-22T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.146773 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.146842 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.146851 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.146866 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.146875 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:33Z","lastTransitionTime":"2025-11-22T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.249152 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.249190 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.249199 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.249214 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.249223 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:33Z","lastTransitionTime":"2025-11-22T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.352213 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.352267 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.352278 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.352298 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.352314 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:33Z","lastTransitionTime":"2025-11-22T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.457922 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.457986 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.458003 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.458028 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.458041 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:33Z","lastTransitionTime":"2025-11-22T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.560996 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.561030 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.561041 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.561056 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.561067 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:33Z","lastTransitionTime":"2025-11-22T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.617630 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:07:33 crc kubenswrapper[4928]: E1122 00:07:33.617775 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.617641 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.617907 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.617836 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:07:33 crc kubenswrapper[4928]: E1122 00:07:33.617990 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:07:33 crc kubenswrapper[4928]: E1122 00:07:33.618036 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:07:33 crc kubenswrapper[4928]: E1122 00:07:33.618102 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.662818 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.662852 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.662865 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.662887 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.662899 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:33Z","lastTransitionTime":"2025-11-22T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.765087 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.765156 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.765179 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.765208 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.765228 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:33Z","lastTransitionTime":"2025-11-22T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.868027 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.868094 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.868113 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.868136 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.868155 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:33Z","lastTransitionTime":"2025-11-22T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.972719 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.972758 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.972768 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.972781 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:33 crc kubenswrapper[4928]: I1122 00:07:33.972805 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:33Z","lastTransitionTime":"2025-11-22T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.075893 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.075946 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.075956 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.075969 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.075994 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:34Z","lastTransitionTime":"2025-11-22T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.211412 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.211499 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.211511 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.211529 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.211541 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:34Z","lastTransitionTime":"2025-11-22T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.314599 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.314669 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.314686 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.314709 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.314728 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:34Z","lastTransitionTime":"2025-11-22T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.417242 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.417278 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.417286 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.417300 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.417308 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:34Z","lastTransitionTime":"2025-11-22T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.519128 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.519170 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.519183 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.519199 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.519211 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:34Z","lastTransitionTime":"2025-11-22T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.621254 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.621309 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.621319 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.621335 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.621347 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:34Z","lastTransitionTime":"2025-11-22T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.723886 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.723929 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.723943 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.723959 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.723970 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:34Z","lastTransitionTime":"2025-11-22T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.827721 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.827827 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.827853 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.827883 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.827905 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:34Z","lastTransitionTime":"2025-11-22T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.930306 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.930341 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.930374 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.930390 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:34 crc kubenswrapper[4928]: I1122 00:07:34.930400 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:34Z","lastTransitionTime":"2025-11-22T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.033151 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.033206 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.033217 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.033232 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.033242 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:35Z","lastTransitionTime":"2025-11-22T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.135986 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.136018 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.136026 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.136039 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.136048 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:35Z","lastTransitionTime":"2025-11-22T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.239155 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.239205 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.239220 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.239254 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.239278 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:35Z","lastTransitionTime":"2025-11-22T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.341977 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.342043 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.342060 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.342083 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.342099 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:35Z","lastTransitionTime":"2025-11-22T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.444410 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.444466 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.444483 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.444505 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.444522 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:35Z","lastTransitionTime":"2025-11-22T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.546945 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.546983 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.546994 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.547010 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.547021 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:35Z","lastTransitionTime":"2025-11-22T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.618184 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.618270 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.618296 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:07:35 crc kubenswrapper[4928]: E1122 00:07:35.618481 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.618554 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:07:35 crc kubenswrapper[4928]: E1122 00:07:35.618637 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:07:35 crc kubenswrapper[4928]: E1122 00:07:35.618740 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:07:35 crc kubenswrapper[4928]: E1122 00:07:35.618935 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.633249 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d55ec606decfa92e80435c75e4334f3af401ab6e2a3a85f755bc5ab60fbf97d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95
b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disab
led\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bf
d8c726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.648093 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e892ca5e0665be6285dce09c13baff951eb1b769532c1b139517e1c0e91b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef08c6da00a3a64de0c044927dfc4cc482bc1
6403a3b914e457410e85f127cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9g8jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.649497 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.649536 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.649544 4928 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.649558 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.649582 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:35Z","lastTransitionTime":"2025-11-22T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.659602 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.675271 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.692109 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.709877 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4
860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.727295 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e8a6e52face946c0a61439452b4d078ca11a0fac66d2e687e86442e4b4acf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:07:13Z\\\",\\\"message\\\":\\\"2025-11-22T00:06:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d4923ffc-26f4-4e28-b539-a1bc02f4ba27\\\\n2025-11-22T00:06:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d4923ffc-26f4-4e28-b539-a1bc02f4ba27 to /host/opt/cni/bin/\\\\n2025-11-22T00:06:28Z [verbose] multus-daemon started\\\\n2025-11-22T00:06:28Z [verbose] 
Readiness Indicator file check\\\\n2025-11-22T00:07:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.745471 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378c39793deccb986
522ae643c2da3b423aa1711bc63648c3cbf01db65f95823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.751362 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.751386 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.751394 4928 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.751406 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.751415 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:35Z","lastTransitionTime":"2025-11-22T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.758593 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.769441 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.780032 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kx6zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3aa30f0-2a7f-4b60-b254-3e894a581f89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kx6zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:35 crc 
kubenswrapper[4928]: I1122 00:07:35.797890 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:07:17Z\\\",\\\"message\\\":\\\"for network=default: []services.lbConfig(nil)\\\\nI1122 00:07:17.445156 6948 services_controller.go:445] Built service openshift-kube-controller-manager-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF1122 00:07:17.445166 6948 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed 
to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:17Z is after 2025-08-24T17:21:41Z]\\\\nI1122 00:07:17.445169 6948 services_controller.go:451] Built service openshift-kube-controller-manager-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protoc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:07:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nchmx_openshift-ovn-kubernetes(31460a9d-62d5-4dae-bb57-4db45544352d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d37
6dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.807358 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f15f601-986f-408b-836c-1c61ecf0fbef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9328d62b6a51b26907125b6fc9fcf05b557e7e9668726cc0814e83cda4220d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c69dae3205a1f382eaf3d731198ceb9cf951e6222d15d2385366fd039f4aca84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69dae3205a1f382eaf3d731198ceb9cf951e6222d15d2385366fd039f4aca84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.819814 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b573
5ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.832895 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-22T00:07:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.844376 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.854166 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"279d01b1-dfa9-4f88-9732-cc1ef2b7d91b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f7a67672f32118ccc6fd888ae51bd40d45fdc973168d3a31d45a4d7fdcb947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55dd3a4e73ec5645006a48d18605b1e3f1a96cada3c5895bfcc10dea861e698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2f7231dd5ee03fb1926ca0a3e57fa2d71013d1ddfd9aa465fa60ebac22b8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f33221bc5927abb9d440a7f5e93ef8d011b16384b3274406cab3b93ab4c66e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0f33221bc5927abb9d440a7f5e93ef8d011b16384b3274406cab3b93ab4c66e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.854273 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.854306 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.854318 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.854334 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.854345 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:35Z","lastTransitionTime":"2025-11-22T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.865532 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b2
8964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:35Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.957033 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.957068 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.957078 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.957094 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:35 crc kubenswrapper[4928]: I1122 00:07:35.957104 4928 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:35Z","lastTransitionTime":"2025-11-22T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.059977 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.060021 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.060031 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.060046 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.060056 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:36Z","lastTransitionTime":"2025-11-22T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.162430 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.162469 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.162480 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.162497 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.162508 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:36Z","lastTransitionTime":"2025-11-22T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.265138 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.265204 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.265221 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.265246 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.265264 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:36Z","lastTransitionTime":"2025-11-22T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.367254 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.367309 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.367320 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.367335 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.367345 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:36Z","lastTransitionTime":"2025-11-22T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.470179 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.470256 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.470269 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.470285 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.470294 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:36Z","lastTransitionTime":"2025-11-22T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.619414 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.619530 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.619550 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.619582 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.619602 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:36Z","lastTransitionTime":"2025-11-22T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.723925 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.723970 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.723980 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.723999 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.724011 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:36Z","lastTransitionTime":"2025-11-22T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.807507 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.807658 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.807691 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.807723 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.807746 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:36Z","lastTransitionTime":"2025-11-22T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:36 crc kubenswrapper[4928]: E1122 00:07:36.825754 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:36Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.830306 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.830344 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.830360 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.830386 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.830403 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:36Z","lastTransitionTime":"2025-11-22T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:36 crc kubenswrapper[4928]: E1122 00:07:36.847036 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:36Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.850880 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.850928 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.850942 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.850977 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.850997 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:36Z","lastTransitionTime":"2025-11-22T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:36 crc kubenswrapper[4928]: E1122 00:07:36.876919 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:36Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.882906 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.882947 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.882961 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.882980 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.882993 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:36Z","lastTransitionTime":"2025-11-22T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:36 crc kubenswrapper[4928]: E1122 00:07:36.904019 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:36Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.908400 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.908467 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.908484 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.908505 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.908519 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:36Z","lastTransitionTime":"2025-11-22T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:36 crc kubenswrapper[4928]: E1122 00:07:36.925692 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:36Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:36 crc kubenswrapper[4928]: E1122 00:07:36.926016 4928 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.927963 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.928008 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.928023 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.928042 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:36 crc kubenswrapper[4928]: I1122 00:07:36.928055 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:36Z","lastTransitionTime":"2025-11-22T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.030533 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.030577 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.030587 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.030604 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.030615 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:37Z","lastTransitionTime":"2025-11-22T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.132848 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.132899 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.132914 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.132933 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.132948 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:37Z","lastTransitionTime":"2025-11-22T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.235513 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.235558 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.235566 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.235587 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.235618 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:37Z","lastTransitionTime":"2025-11-22T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.338336 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.338378 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.338389 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.338406 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.338419 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:37Z","lastTransitionTime":"2025-11-22T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.441858 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.441932 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.441950 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.441977 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.441991 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:37Z","lastTransitionTime":"2025-11-22T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.546055 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.546129 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.546147 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.546176 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.546197 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:37Z","lastTransitionTime":"2025-11-22T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.617464 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:07:37 crc kubenswrapper[4928]: E1122 00:07:37.617716 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.618134 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:07:37 crc kubenswrapper[4928]: E1122 00:07:37.618248 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.618473 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:07:37 crc kubenswrapper[4928]: E1122 00:07:37.618572 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.619507 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:07:37 crc kubenswrapper[4928]: E1122 00:07:37.619695 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.642403 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.650746 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.650858 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.650877 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.650902 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.650921 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:37Z","lastTransitionTime":"2025-11-22T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.753233 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.753290 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.753301 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.753314 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.753341 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:37Z","lastTransitionTime":"2025-11-22T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.855478 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.855538 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.855593 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.855618 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.855637 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:37Z","lastTransitionTime":"2025-11-22T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.958729 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.958831 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.958848 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.958867 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:37 crc kubenswrapper[4928]: I1122 00:07:37.958879 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:37Z","lastTransitionTime":"2025-11-22T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.061459 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.061497 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.061504 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.061538 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.061550 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:38Z","lastTransitionTime":"2025-11-22T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.164264 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.164302 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.164311 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.164326 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.164335 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:38Z","lastTransitionTime":"2025-11-22T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.266586 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.266654 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.266673 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.266699 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.266716 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:38Z","lastTransitionTime":"2025-11-22T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.370446 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.370542 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.370553 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.370574 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.370586 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:38Z","lastTransitionTime":"2025-11-22T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.473161 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.473206 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.473223 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.473245 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.473261 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:38Z","lastTransitionTime":"2025-11-22T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.575368 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.575406 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.575426 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.575441 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.575451 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:38Z","lastTransitionTime":"2025-11-22T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.678295 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.678380 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.678397 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.678420 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.678436 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:38Z","lastTransitionTime":"2025-11-22T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.781277 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.781335 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.781357 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.781380 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.781393 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:38Z","lastTransitionTime":"2025-11-22T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.884564 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.884615 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.884630 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.884651 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.884666 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:38Z","lastTransitionTime":"2025-11-22T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.987477 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.987541 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.987562 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.987591 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:38 crc kubenswrapper[4928]: I1122 00:07:38.987613 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:38Z","lastTransitionTime":"2025-11-22T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.090072 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.090678 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.090709 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.090737 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.090754 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:39Z","lastTransitionTime":"2025-11-22T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.193578 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.193633 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.193646 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.193667 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.193678 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:39Z","lastTransitionTime":"2025-11-22T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.296050 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.296088 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.296100 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.296117 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.296127 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:39Z","lastTransitionTime":"2025-11-22T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.400693 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.400737 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.400748 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.400762 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.400771 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:39Z","lastTransitionTime":"2025-11-22T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.503898 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.503944 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.503959 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.503981 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.503995 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:39Z","lastTransitionTime":"2025-11-22T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.606520 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.606594 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.606614 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.606639 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.606656 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:39Z","lastTransitionTime":"2025-11-22T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.618354 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.618415 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.618439 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:07:39 crc kubenswrapper[4928]: E1122 00:07:39.618550 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.618579 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:07:39 crc kubenswrapper[4928]: E1122 00:07:39.618716 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:07:39 crc kubenswrapper[4928]: E1122 00:07:39.618881 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:07:39 crc kubenswrapper[4928]: E1122 00:07:39.618995 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.709764 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.709863 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.709882 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.709908 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.709927 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:39Z","lastTransitionTime":"2025-11-22T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.813230 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.813295 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.813311 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.813331 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.813346 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:39Z","lastTransitionTime":"2025-11-22T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.916838 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.916896 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.916918 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.916949 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:39 crc kubenswrapper[4928]: I1122 00:07:39.916967 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:39Z","lastTransitionTime":"2025-11-22T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.020423 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.020479 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.020490 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.020510 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.020523 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:40Z","lastTransitionTime":"2025-11-22T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.122626 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.122673 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.122690 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.122709 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.122722 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:40Z","lastTransitionTime":"2025-11-22T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.225532 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.225572 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.225580 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.225593 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.225602 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:40Z","lastTransitionTime":"2025-11-22T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.329991 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.330057 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.330073 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.330098 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.330115 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:40Z","lastTransitionTime":"2025-11-22T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.433296 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.433582 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.433590 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.433605 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.433613 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:40Z","lastTransitionTime":"2025-11-22T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.536603 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.536650 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.536667 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.536691 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.536708 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:40Z","lastTransitionTime":"2025-11-22T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.638580 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.638633 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.638651 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.638676 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.638697 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:40Z","lastTransitionTime":"2025-11-22T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.740338 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.740384 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.740396 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.740417 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.740429 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:40Z","lastTransitionTime":"2025-11-22T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.843162 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.843206 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.843217 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.843233 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.843248 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:40Z","lastTransitionTime":"2025-11-22T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.945790 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.945864 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.945880 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.945900 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:40 crc kubenswrapper[4928]: I1122 00:07:40.945912 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:40Z","lastTransitionTime":"2025-11-22T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.048096 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.048175 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.048189 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.048208 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.048293 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:41Z","lastTransitionTime":"2025-11-22T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.151276 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.151327 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.151338 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.151353 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.151361 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:41Z","lastTransitionTime":"2025-11-22T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.253898 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.253957 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.253975 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.253996 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.254013 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:41Z","lastTransitionTime":"2025-11-22T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.356070 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.356145 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.356171 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.356245 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.356266 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:41Z","lastTransitionTime":"2025-11-22T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.459251 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.459315 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.459329 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.459350 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.459363 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:41Z","lastTransitionTime":"2025-11-22T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.562366 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.562421 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.562437 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.562460 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.562478 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:41Z","lastTransitionTime":"2025-11-22T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.625668 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.625848 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.625898 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.625943 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:07:41 crc kubenswrapper[4928]: E1122 00:07:41.625864 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:07:41 crc kubenswrapper[4928]: E1122 00:07:41.626040 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:07:41 crc kubenswrapper[4928]: E1122 00:07:41.626099 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:07:41 crc kubenswrapper[4928]: E1122 00:07:41.626203 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.665505 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.665558 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.665574 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.665599 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.665618 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:41Z","lastTransitionTime":"2025-11-22T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.768489 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.768577 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.768596 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.768619 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.768640 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:41Z","lastTransitionTime":"2025-11-22T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.872076 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.872146 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.872162 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.872188 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.872206 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:41Z","lastTransitionTime":"2025-11-22T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.975452 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.975509 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.975526 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.975549 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:41 crc kubenswrapper[4928]: I1122 00:07:41.975563 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:41Z","lastTransitionTime":"2025-11-22T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.079701 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.079839 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.079869 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.079904 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.079931 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:42Z","lastTransitionTime":"2025-11-22T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.183119 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.183223 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.183243 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.183275 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.183295 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:42Z","lastTransitionTime":"2025-11-22T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.285820 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.285886 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.285909 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.285940 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.285961 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:42Z","lastTransitionTime":"2025-11-22T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.389487 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.389552 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.389570 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.389594 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.389611 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:42Z","lastTransitionTime":"2025-11-22T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.492920 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.492994 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.493012 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.493036 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.493055 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:42Z","lastTransitionTime":"2025-11-22T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.597317 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.597375 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.597394 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.597418 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.597436 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:42Z","lastTransitionTime":"2025-11-22T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.620364 4928 scope.go:117] "RemoveContainer" containerID="de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10" Nov 22 00:07:42 crc kubenswrapper[4928]: E1122 00:07:42.620745 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nchmx_openshift-ovn-kubernetes(31460a9d-62d5-4dae-bb57-4db45544352d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.700636 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.700839 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.700869 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.700897 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.700920 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:42Z","lastTransitionTime":"2025-11-22T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.804320 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.804383 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.804401 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.804425 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.804443 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:42Z","lastTransitionTime":"2025-11-22T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.907746 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.907829 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.907846 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.907865 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:42 crc kubenswrapper[4928]: I1122 00:07:42.907880 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:42Z","lastTransitionTime":"2025-11-22T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.010511 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.010548 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.010561 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.010579 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.010590 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:43Z","lastTransitionTime":"2025-11-22T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.112752 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.112832 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.112844 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.112861 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.112876 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:43Z","lastTransitionTime":"2025-11-22T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.216042 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.216082 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.216091 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.216104 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.216113 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:43Z","lastTransitionTime":"2025-11-22T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.318983 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.319027 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.319035 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.319050 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.319058 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:43Z","lastTransitionTime":"2025-11-22T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.422208 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.422281 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.422299 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.422324 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.422343 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:43Z","lastTransitionTime":"2025-11-22T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.524694 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.524740 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.524756 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.524777 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.524813 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:43Z","lastTransitionTime":"2025-11-22T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.617432 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.617501 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.617536 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:07:43 crc kubenswrapper[4928]: E1122 00:07:43.617594 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.617614 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:07:43 crc kubenswrapper[4928]: E1122 00:07:43.617752 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:07:43 crc kubenswrapper[4928]: E1122 00:07:43.617867 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:07:43 crc kubenswrapper[4928]: E1122 00:07:43.617931 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.627197 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.627241 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.627258 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.627277 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.627291 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:43Z","lastTransitionTime":"2025-11-22T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.730213 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.730265 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.730285 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.730305 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.730319 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:43Z","lastTransitionTime":"2025-11-22T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.837362 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.837412 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.837428 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.837450 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.837467 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:43Z","lastTransitionTime":"2025-11-22T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.940889 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.940964 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.940984 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.941010 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:43 crc kubenswrapper[4928]: I1122 00:07:43.941028 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:43Z","lastTransitionTime":"2025-11-22T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.044369 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.044565 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.044601 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.044634 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.044657 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:44Z","lastTransitionTime":"2025-11-22T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.148101 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.148159 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.148171 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.148189 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.148201 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:44Z","lastTransitionTime":"2025-11-22T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.250791 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.250847 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.250861 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.250881 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.250892 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:44Z","lastTransitionTime":"2025-11-22T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.311176 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3aa30f0-2a7f-4b60-b254-3e894a581f89-metrics-certs\") pod \"network-metrics-daemon-kx6zl\" (UID: \"b3aa30f0-2a7f-4b60-b254-3e894a581f89\") " pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:07:44 crc kubenswrapper[4928]: E1122 00:07:44.311345 4928 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 00:07:44 crc kubenswrapper[4928]: E1122 00:07:44.311446 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3aa30f0-2a7f-4b60-b254-3e894a581f89-metrics-certs podName:b3aa30f0-2a7f-4b60-b254-3e894a581f89 nodeName:}" failed. No retries permitted until 2025-11-22 00:08:48.311420486 +0000 UTC m=+163.599451707 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3aa30f0-2a7f-4b60-b254-3e894a581f89-metrics-certs") pod "network-metrics-daemon-kx6zl" (UID: "b3aa30f0-2a7f-4b60-b254-3e894a581f89") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.354092 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.354143 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.354155 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.354171 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.354186 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:44Z","lastTransitionTime":"2025-11-22T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.456860 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.456936 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.456950 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.456989 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.457003 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:44Z","lastTransitionTime":"2025-11-22T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.559659 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.559727 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.559745 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.559769 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.559787 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:44Z","lastTransitionTime":"2025-11-22T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.662362 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.662440 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.662475 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.662543 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.662567 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:44Z","lastTransitionTime":"2025-11-22T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.765664 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.765732 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.765758 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.765843 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.765875 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:44Z","lastTransitionTime":"2025-11-22T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.868988 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.869061 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.869089 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.869118 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.869141 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:44Z","lastTransitionTime":"2025-11-22T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.973014 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.973080 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.973104 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.973132 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:44 crc kubenswrapper[4928]: I1122 00:07:44.973156 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:44Z","lastTransitionTime":"2025-11-22T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.075765 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.075851 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.075866 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.075884 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.075900 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:45Z","lastTransitionTime":"2025-11-22T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.179218 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.179300 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.179329 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.179367 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.179393 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:45Z","lastTransitionTime":"2025-11-22T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.282361 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.282389 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.282397 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.282408 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.282419 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:45Z","lastTransitionTime":"2025-11-22T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.385654 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.385715 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.385732 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.385754 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.385770 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:45Z","lastTransitionTime":"2025-11-22T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.489123 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.489178 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.489196 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.489219 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.489236 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:45Z","lastTransitionTime":"2025-11-22T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.591162 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.591228 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.591251 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.591281 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.591304 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:45Z","lastTransitionTime":"2025-11-22T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.618191 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.618239 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.618246 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:07:45 crc kubenswrapper[4928]: E1122 00:07:45.618463 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.618493 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:07:45 crc kubenswrapper[4928]: E1122 00:07:45.618940 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:07:45 crc kubenswrapper[4928]: E1122 00:07:45.619326 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:07:45 crc kubenswrapper[4928]: E1122 00:07:45.619457 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.633861 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe5fdda-8d76-4a8d-8a8d-d25be738257a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87e892ca5e0665be6285dce09c13baff951eb1b769532c1b139517e1c0e91b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c0
8aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef08c6da00a3a64de0c044927dfc4cc482bc16403a3b914e457410e85f127cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wp4s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:39Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9g8jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.649417 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dsr54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1144bf-61e9-4de0-a360-d6252730bbf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbeeb0156ae44d232fe1b2832140e10795f344b5b00ef4c3e0dcc5fa558136f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw2xg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dsr54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.664503 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.683525 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3487b13a-0c53-417a-a73c-4709af1f7a10\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d55ec606decfa92e80435c75e4334f3af401ab6e2a3a85f755bc5ab60fbf97d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f20d3183f9055b1c143bcc2d6d064435471cff86d8088fca676a74f6a7ceb6a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506bcbc8e4e3fe08acc912936c52c0534c70a82e69635dd4102df3f1b847ddd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b46df1696dece19e4c6721e75d6eabc4d7fdcf5b1eb32ff5465b6d9400b3d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bdf9
d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bdf9d75633c348fe09a5807c61dd2e0e24531ad89e0b7dfa70656b7b3a8c95f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab18db522503f1ee5ead950cd36f7262036aa4a4206ab6a83e969193748ae17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a7015d831480eb2b2a75b91a287d8187c1044df93f2023579f43d6bfd8c726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjp9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx2zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.693661 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.693719 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.693729 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.693743 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.693753 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:45Z","lastTransitionTime":"2025-11-22T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.699182 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"897ce64d-0600-4c6a-b684-0c1683fa14bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9424296c470490d049270c706dffaabd4b94420af8a6300e9e15f57f88fd6686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf4923905a54d888d652ff50f477aa1487106dd4860c1c35b374fce196d89a70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tnb6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hp7cp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.715941 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xmk2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c474173e-56bf-467c-84e6-e473d89563c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e8a6e52face946c0a61439452b4d078ca11a0fac66d2e687e86442e4b4acf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:07:13Z\\\",\\\"message\\\":\\\"2025-11-22T00:06:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d4923ffc-26f4-4e28-b539-a1bc02f4ba27\\\\n2025-11-22T00:06:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d4923ffc-26f4-4e28-b539-a1bc02f4ba27 to /host/opt/cni/bin/\\\\n2025-11-22T00:06:28Z [verbose] multus-daemon started\\\\n2025-11-22T00:06:28Z [verbose] 
Readiness Indicator file check\\\\n2025-11-22T00:07:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hpnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xmk2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.727585 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lwwmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07bc9047-a464-4661-8a73-9d5919a4f15f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378c39793deccb986
522ae643c2da3b423aa1711bc63648c3cbf01db65f95823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pbm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lwwmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.744081 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.756578 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f07c26440a0bad2e99acdfb61a83a10b11d9f86ced2aff97e088d044dd4ee8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.782584 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff0b6ec5af2df3ade9a536ad32d4a7a9b7367a18d67d6013f32ce65f8d70e67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://4c66b119e81af2b02a1d1409aaceb9e363d057c277cec240c46fd9aeb75173d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.796308 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.796362 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.796379 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.796401 4928 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.796418 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:45Z","lastTransitionTime":"2025-11-22T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.803581 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8351630-2c5a-4308-8441-3b71f7ac8636\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d611bed170e033441a0ada0f977cbf305383f2fc559122eae9c66375d4807655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b57771fb9f7ac7fc2cbde7ab2bd0aacdd4e0ea396ec915ed9fb7acd87d7bd82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f5a8a1806734be5162a454c5faa6bb5dba16d8908d7422521c1c4c6e95bbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9a396d655ef176af8db1478f44df9b4a1a366aef3c4406cb3a550b1131f8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://149a0664b676e3ba87ac6f19b27bb00eb77fa2b4763df9251282f8e527ecca13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostI
P\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f49ab5a57258af6ebf338f99b142b303c6297ba3bd1d6fd1180ef70f07be521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f49ab5a57258af6ebf338f99b142b303c6297ba3bd1d6fd1180ef70f07be521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0f27b367b69b9449a57e942f29314b30bcbf987b113e93163adac3d5beef4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0f27b367b69b9449a57e942f29314b30bcbf987b113e93163adac3d5beef4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://66e102144df16586c53c5a2906c0540d2c10a28b773c60e3ffcf5f5304dacb08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66e102144df16586c53c5a2906c0540d2c10a28b773c60e3ffcf5f5304dacb08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.831456 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31460a9d-62d5-4dae-bb57-4db45544352d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T00:07:17Z\\\",\\\"message\\\":\\\"for network=default: []services.lbConfig(nil)\\\\nI1122 00:07:17.445156 6948 services_controller.go:445] Built service openshift-kube-controller-manager-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF1122 00:07:17.445166 6948 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed 
to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:17Z is after 2025-08-24T17:21:41Z]\\\\nI1122 00:07:17.445169 6948 services_controller.go:451] Built service openshift-kube-controller-manager-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protoc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:07:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nchmx_openshift-ovn-kubernetes(31460a9d-62d5-4dae-bb57-4db45544352d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d753c0ff230806d37
6dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-74sb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nchmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.846124 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kx6zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3aa30f0-2a7f-4b60-b254-3e894a581f89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq7wl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kx6zl\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.864957 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56ccfbdb-bdb5-4d94-a630-0e8326e1b9a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b9447511e73bd7a6ef1b2c4a024a7683715f1e98431edb09826ef34bc291f49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6a226180c8277ed5d342a1cf233519fa403e5a9d80639a62e45cd5c1c9ad5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5578231cf43d9181e8a7f9a5fd0119ffe84cd02a893e76a54a228f172d5ec67d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2a5d6a2fa973cfa6e1fda481489feecd8388b354dd6a7e7a068f148d5d105\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddbe52a78c8e5cb0751c977daed9ca2a785a975047bf91fae9a4697637576549\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1122 00:06:19.569136 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 00:06:19.572947 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-131499599/tls.crt::/tmp/serving-cert-131499599/tls.key\\\\\\\"\\\\nI1122 00:06:25.103074 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1122 00:06:25.107686 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1122 00:06:25.107718 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1122 00:06:25.107752 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1122 00:06:25.107759 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1122 00:06:25.125343 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1122 00:06:25.125380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125385 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1122 00:06:25.125392 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1122 00:06:25.125395 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1122 00:06:25.125397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1122 00:06:25.125400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1122 00:06:25.125782 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1122 00:06:25.127812 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e0d5a499e8f062a04e7b581c91c639f1806ebbc4738272b3ec09c90e39b48d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75
ca754ebc06eb517f1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f28e40bf25db990c7040380dfd420b5735ed8dc8fbbc75ca754ebc06eb517f1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.877735 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ff736217f7e92a18227180753ea3acf5b646908fc8c48a84feaf8f87ca96b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-22T00:07:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.892874 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.898276 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.898315 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.898326 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.898341 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.898352 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:45Z","lastTransitionTime":"2025-11-22T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.909701 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"279d01b1-dfa9-4f88-9732-cc1ef2b7d91b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f7a67672f32118ccc6fd888ae51bd40d45fdc973168d3a31d45a4d7fdcb947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\
\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55dd3a4e73ec5645006a48d18605b1e3f1a96cada3c5895bfcc10dea861e698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2f7231dd5ee03fb1926ca0a3e57fa2d71013d1ddfd9aa465fa60ebac22b8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f33221bc5927abb9d440a7f5e93ef8d011b16384b3274406cab3b93ab4c66e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8
a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f33221bc5927abb9d440a7f5e93ef8d011b16384b3274406cab3b93ab4c66e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.925664 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768ca9a5-1e81-4e29-a752-b2fe9d165336\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63b2d1f76c16c7fa54b9b4ed3ecc2401c9f298b36759750e1e0f92b1e22b4162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e22085c76b28964827a41bc65eb8aa0c0022211d51bf7d867982149e43644963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5211fb56e5a2da5731ef972020e11fda897cd640fdc11c1a2888f2d4dcb2b6bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863f107a12ade62b8b689c3933e06f17719056594765113a1687f786fafcff35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:45 crc kubenswrapper[4928]: I1122 00:07:45.940454 4928 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f15f601-986f-408b-836c-1c61ecf0fbef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T00:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9328d62b6a51b26907125b6fc9fcf05b557e7e9668726cc0814e83cda4220d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T00:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c69dae3205a1f382eaf3d731198ceb9cf951e6222d15d2385366fd039f4aca84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69dae3205a1f382eaf3d731198ceb9cf951e6222d15d2385366fd039f4aca84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T00:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T00:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T00:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:45Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.001444 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.001491 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.001503 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.001520 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.001532 4928 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:46Z","lastTransitionTime":"2025-11-22T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.104689 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.104757 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.104780 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.104845 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.104866 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:46Z","lastTransitionTime":"2025-11-22T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.207214 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.207246 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.207256 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.207269 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.207278 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:46Z","lastTransitionTime":"2025-11-22T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.309399 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.309461 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.309478 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.309501 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.309520 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:46Z","lastTransitionTime":"2025-11-22T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.412900 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.412959 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.413098 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.413445 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.413507 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:46Z","lastTransitionTime":"2025-11-22T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.517625 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.517707 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.517725 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.518122 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.518493 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:46Z","lastTransitionTime":"2025-11-22T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.621452 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.621511 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.621532 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.621557 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.621580 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:46Z","lastTransitionTime":"2025-11-22T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.723966 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.724024 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.724040 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.724065 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.724083 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:46Z","lastTransitionTime":"2025-11-22T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.826919 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.826989 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.827010 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.827031 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.827045 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:46Z","lastTransitionTime":"2025-11-22T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.930188 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.930222 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.930233 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.930250 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:46 crc kubenswrapper[4928]: I1122 00:07:46.930261 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:46Z","lastTransitionTime":"2025-11-22T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.032165 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.032200 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.032209 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.032222 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.032231 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:47Z","lastTransitionTime":"2025-11-22T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.134786 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.134875 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.134888 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.134906 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.134928 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:47Z","lastTransitionTime":"2025-11-22T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.236879 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.236927 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.236940 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.236956 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.236967 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:47Z","lastTransitionTime":"2025-11-22T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.300958 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.301034 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.301051 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.301078 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.301101 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:47Z","lastTransitionTime":"2025-11-22T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:47 crc kubenswrapper[4928]: E1122 00:07:47.325272 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:47Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.332070 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.332145 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.332165 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.332638 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.333100 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:47Z","lastTransitionTime":"2025-11-22T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:47 crc kubenswrapper[4928]: E1122 00:07:47.356326 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:47Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.361775 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.361868 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.361888 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.361916 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.361938 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:47Z","lastTransitionTime":"2025-11-22T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:47 crc kubenswrapper[4928]: E1122 00:07:47.382893 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:47Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.388240 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.388313 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.388342 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.388377 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.388403 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:47Z","lastTransitionTime":"2025-11-22T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:47 crc kubenswrapper[4928]: E1122 00:07:47.409453 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:47Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.414453 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.414509 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.414522 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.414541 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.414555 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:47Z","lastTransitionTime":"2025-11-22T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:47 crc kubenswrapper[4928]: E1122 00:07:47.431959 4928 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T00:07:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a2f9aed2-6a0f-4618-a8fa-7fae68df5ae6\\\",\\\"systemUUID\\\":\\\"c769a56a-a12f-4ddf-8e71-bd2116048a86\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T00:07:47Z is after 2025-08-24T17:21:41Z" Nov 22 00:07:47 crc kubenswrapper[4928]: E1122 00:07:47.432165 4928 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.433903 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.433944 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.433980 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.433997 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.434008 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:47Z","lastTransitionTime":"2025-11-22T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.537599 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.537656 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.537671 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.537691 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.537704 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:47Z","lastTransitionTime":"2025-11-22T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.617579 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.617853 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.618039 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:07:47 crc kubenswrapper[4928]: E1122 00:07:47.618028 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.618093 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:07:47 crc kubenswrapper[4928]: E1122 00:07:47.618316 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:07:47 crc kubenswrapper[4928]: E1122 00:07:47.618440 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:07:47 crc kubenswrapper[4928]: E1122 00:07:47.619245 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.641398 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.641466 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.641480 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.641505 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.641523 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:47Z","lastTransitionTime":"2025-11-22T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.743900 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.743987 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.744014 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.744046 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.744071 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:47Z","lastTransitionTime":"2025-11-22T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.848337 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.848474 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.848492 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.848515 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.848533 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:47Z","lastTransitionTime":"2025-11-22T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.952748 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.952868 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.952901 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.952935 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:47 crc kubenswrapper[4928]: I1122 00:07:47.952956 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:47Z","lastTransitionTime":"2025-11-22T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.056215 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.056272 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.056290 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.056316 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.056337 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:48Z","lastTransitionTime":"2025-11-22T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.160329 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.160413 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.160440 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.160479 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.160503 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:48Z","lastTransitionTime":"2025-11-22T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.263727 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.263897 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.263928 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.263967 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.263995 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:48Z","lastTransitionTime":"2025-11-22T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.368349 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.368443 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.368469 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.368505 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.368528 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:48Z","lastTransitionTime":"2025-11-22T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.472332 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.472402 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.472430 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.472514 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.472542 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:48Z","lastTransitionTime":"2025-11-22T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.576751 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.576856 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.576882 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.576914 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.576938 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:48Z","lastTransitionTime":"2025-11-22T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.679895 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.679978 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.679999 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.680028 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.680051 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:48Z","lastTransitionTime":"2025-11-22T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.783056 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.783098 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.783109 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.783126 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.783140 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:48Z","lastTransitionTime":"2025-11-22T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.887184 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.887254 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.887277 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.887308 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.887326 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:48Z","lastTransitionTime":"2025-11-22T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.991321 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.991380 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.991393 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.991416 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:48 crc kubenswrapper[4928]: I1122 00:07:48.991432 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:48Z","lastTransitionTime":"2025-11-22T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.094101 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.094170 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.094189 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.094216 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.094234 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:49Z","lastTransitionTime":"2025-11-22T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.197970 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.198073 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.198097 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.198134 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.198158 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:49Z","lastTransitionTime":"2025-11-22T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.301211 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.301262 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.301272 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.301288 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.301299 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:49Z","lastTransitionTime":"2025-11-22T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.404973 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.405148 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.405177 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.405212 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.405236 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:49Z","lastTransitionTime":"2025-11-22T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.509440 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.509525 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.509546 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.509582 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.509612 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:49Z","lastTransitionTime":"2025-11-22T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.613924 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.614008 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.614030 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.614061 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.614084 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:49Z","lastTransitionTime":"2025-11-22T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.617524 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.617589 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:07:49 crc kubenswrapper[4928]: E1122 00:07:49.617710 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.617958 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:07:49 crc kubenswrapper[4928]: E1122 00:07:49.618174 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:07:49 crc kubenswrapper[4928]: E1122 00:07:49.617963 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.617990 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:07:49 crc kubenswrapper[4928]: E1122 00:07:49.618611 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.716612 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.716690 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.716708 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.716737 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.716758 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:49Z","lastTransitionTime":"2025-11-22T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.819484 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.819543 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.819554 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.819573 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.819586 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:49Z","lastTransitionTime":"2025-11-22T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.922949 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.923030 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.923049 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.923075 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:49 crc kubenswrapper[4928]: I1122 00:07:49.923096 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:49Z","lastTransitionTime":"2025-11-22T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.026734 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.026853 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.026884 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.026960 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.026987 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:50Z","lastTransitionTime":"2025-11-22T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.130849 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.131265 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.131453 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.131662 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.132075 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:50Z","lastTransitionTime":"2025-11-22T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.235931 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.236258 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.236419 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.236580 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.236725 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:50Z","lastTransitionTime":"2025-11-22T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.340568 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.341051 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.341206 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.341348 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.341507 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:50Z","lastTransitionTime":"2025-11-22T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.444474 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.444781 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.444854 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.444887 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.444975 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:50Z","lastTransitionTime":"2025-11-22T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.547704 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.547830 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.547844 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.547863 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.547875 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:50Z","lastTransitionTime":"2025-11-22T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.649981 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.650031 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.650043 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.650060 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.650075 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:50Z","lastTransitionTime":"2025-11-22T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.753186 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.753602 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.753914 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.754174 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.754389 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:50Z","lastTransitionTime":"2025-11-22T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.858201 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.858253 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.858269 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.858288 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.858302 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:50Z","lastTransitionTime":"2025-11-22T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.962120 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.962186 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.962212 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.962240 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:50 crc kubenswrapper[4928]: I1122 00:07:50.962257 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:50Z","lastTransitionTime":"2025-11-22T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.066049 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.066122 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.066140 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.066162 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.066176 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:51Z","lastTransitionTime":"2025-11-22T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.170758 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.170938 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.170999 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.171029 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.171088 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:51Z","lastTransitionTime":"2025-11-22T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.276657 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.276731 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.276749 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.276779 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.276897 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:51Z","lastTransitionTime":"2025-11-22T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.380164 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.380319 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.380341 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.380371 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.380393 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:51Z","lastTransitionTime":"2025-11-22T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.484430 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.484498 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.484515 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.484543 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.484562 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:51Z","lastTransitionTime":"2025-11-22T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.588197 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.588262 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.588279 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.588308 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.588328 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:51Z","lastTransitionTime":"2025-11-22T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.618036 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.618081 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.618110 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:07:51 crc kubenswrapper[4928]: E1122 00:07:51.618272 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.618331 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:07:51 crc kubenswrapper[4928]: E1122 00:07:51.618702 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:07:51 crc kubenswrapper[4928]: E1122 00:07:51.618765 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:07:51 crc kubenswrapper[4928]: E1122 00:07:51.619219 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.691691 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.691760 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.691779 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.691837 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.691859 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:51Z","lastTransitionTime":"2025-11-22T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.796286 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.796355 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.796373 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.796399 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.796417 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:51Z","lastTransitionTime":"2025-11-22T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.898788 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.898871 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.898888 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.898911 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:51 crc kubenswrapper[4928]: I1122 00:07:51.898927 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:51Z","lastTransitionTime":"2025-11-22T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.004020 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.004163 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.004185 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.004245 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.004266 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:52Z","lastTransitionTime":"2025-11-22T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.107179 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.107240 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.107254 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.107276 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.107319 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:52Z","lastTransitionTime":"2025-11-22T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.210818 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.210880 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.210894 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.210923 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.210953 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:52Z","lastTransitionTime":"2025-11-22T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.314767 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.314844 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.314856 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.314874 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.314886 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:52Z","lastTransitionTime":"2025-11-22T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.417831 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.417889 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.417901 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.417917 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.417928 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:52Z","lastTransitionTime":"2025-11-22T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.521322 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.521387 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.521407 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.521428 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.521443 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:52Z","lastTransitionTime":"2025-11-22T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.624445 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.624495 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.624508 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.624525 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.624535 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:52Z","lastTransitionTime":"2025-11-22T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.727748 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.727908 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.727937 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.727972 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.727995 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:52Z","lastTransitionTime":"2025-11-22T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.831318 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.831366 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.831378 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.831397 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.831406 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:52Z","lastTransitionTime":"2025-11-22T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.934958 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.935042 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.935493 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.935539 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:52 crc kubenswrapper[4928]: I1122 00:07:52.935561 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:52Z","lastTransitionTime":"2025-11-22T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.039312 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.039385 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.039401 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.039426 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.039439 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:53Z","lastTransitionTime":"2025-11-22T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.143005 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.143082 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.143098 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.143123 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.143141 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:53Z","lastTransitionTime":"2025-11-22T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.247062 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.247152 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.247170 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.247196 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.247220 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:53Z","lastTransitionTime":"2025-11-22T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.354089 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.354181 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.354204 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.354240 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.354264 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:53Z","lastTransitionTime":"2025-11-22T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.457504 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.457560 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.457576 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.457601 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.457616 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:53Z","lastTransitionTime":"2025-11-22T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.560334 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.560419 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.560441 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.560470 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.560491 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:53Z","lastTransitionTime":"2025-11-22T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.617663 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.617783 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.617937 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.617838 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:07:53 crc kubenswrapper[4928]: E1122 00:07:53.618082 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:07:53 crc kubenswrapper[4928]: E1122 00:07:53.618186 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:07:53 crc kubenswrapper[4928]: E1122 00:07:53.619047 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:07:53 crc kubenswrapper[4928]: E1122 00:07:53.619403 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.663503 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.663590 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.663630 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.663673 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.663700 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:53Z","lastTransitionTime":"2025-11-22T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.767210 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.767315 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.767341 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.767382 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.767410 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:53Z","lastTransitionTime":"2025-11-22T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.870462 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.870548 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.870569 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.870606 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.870630 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:53Z","lastTransitionTime":"2025-11-22T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.973615 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.973681 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.973694 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.973719 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:53 crc kubenswrapper[4928]: I1122 00:07:53.973732 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:53Z","lastTransitionTime":"2025-11-22T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.076759 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.076902 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.076935 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.076966 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.076988 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:54Z","lastTransitionTime":"2025-11-22T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.180124 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.180181 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.180197 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.180220 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.180242 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:54Z","lastTransitionTime":"2025-11-22T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.283010 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.283081 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.283099 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.283126 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.283145 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:54Z","lastTransitionTime":"2025-11-22T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.385696 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.385775 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.385846 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.385893 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.385916 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:54Z","lastTransitionTime":"2025-11-22T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.488581 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.488654 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.488679 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.488706 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.488723 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:54Z","lastTransitionTime":"2025-11-22T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.592149 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.592204 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.592215 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.592233 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.592247 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:54Z","lastTransitionTime":"2025-11-22T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.619405 4928 scope.go:117] "RemoveContainer" containerID="de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10" Nov 22 00:07:54 crc kubenswrapper[4928]: E1122 00:07:54.619761 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nchmx_openshift-ovn-kubernetes(31460a9d-62d5-4dae-bb57-4db45544352d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.694503 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.694551 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.694562 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.694579 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.694588 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:54Z","lastTransitionTime":"2025-11-22T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.796648 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.796719 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.796737 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.796761 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.796780 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:54Z","lastTransitionTime":"2025-11-22T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.899225 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.899297 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.899321 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.899360 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:54 crc kubenswrapper[4928]: I1122 00:07:54.899385 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:54Z","lastTransitionTime":"2025-11-22T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.003273 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.003351 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.003373 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.003397 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.003418 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:55Z","lastTransitionTime":"2025-11-22T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.106478 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.106723 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.106762 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.106829 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.106859 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:55Z","lastTransitionTime":"2025-11-22T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.210325 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.210373 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.210389 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.210413 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.210429 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:55Z","lastTransitionTime":"2025-11-22T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.312341 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.312395 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.312410 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.312432 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.312447 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:55Z","lastTransitionTime":"2025-11-22T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.414349 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.414391 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.414400 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.414412 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.414422 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:55Z","lastTransitionTime":"2025-11-22T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.516399 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.516430 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.516438 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.516452 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.516460 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:55Z","lastTransitionTime":"2025-11-22T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.618225 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.618264 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.618276 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:07:55 crc kubenswrapper[4928]: E1122 00:07:55.618386 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.618609 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:07:55 crc kubenswrapper[4928]: E1122 00:07:55.618700 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:07:55 crc kubenswrapper[4928]: E1122 00:07:55.618945 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:07:55 crc kubenswrapper[4928]: E1122 00:07:55.619161 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.619761 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.619782 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.619808 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.619822 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.619837 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:55Z","lastTransitionTime":"2025-11-22T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.671834 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=18.671814329 podStartE2EDuration="18.671814329s" podCreationTimestamp="2025-11-22 00:07:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:07:55.671573573 +0000 UTC m=+110.959604784" watchObservedRunningTime="2025-11-22 00:07:55.671814329 +0000 UTC m=+110.959845520" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.705552 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=40.705527375 podStartE2EDuration="40.705527375s" podCreationTimestamp="2025-11-22 00:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:07:55.705202606 +0000 UTC m=+110.993233827" watchObservedRunningTime="2025-11-22 00:07:55.705527375 +0000 UTC m=+110.993558566" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.723527 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.723575 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.723587 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.723740 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.723756 4928 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:55Z","lastTransitionTime":"2025-11-22T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.731237 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=90.731218402 podStartE2EDuration="1m30.731218402s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:07:55.719755008 +0000 UTC m=+111.007786219" watchObservedRunningTime="2025-11-22 00:07:55.731218402 +0000 UTC m=+111.019249593" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.776877 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=54.776854975 podStartE2EDuration="54.776854975s" podCreationTimestamp="2025-11-22 00:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:07:55.756622092 +0000 UTC m=+111.044653293" watchObservedRunningTime="2025-11-22 00:07:55.776854975 +0000 UTC m=+111.064886166" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.777240 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=89.777234267 podStartE2EDuration="1m29.777234267s" podCreationTimestamp="2025-11-22 00:06:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-22 00:07:55.77631494 +0000 UTC m=+111.064346151" watchObservedRunningTime="2025-11-22 00:07:55.777234267 +0000 UTC m=+111.065265458" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.804638 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9g8jb" podStartSLOduration=90.804620262 podStartE2EDuration="1m30.804620262s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:07:55.804525669 +0000 UTC m=+111.092556870" watchObservedRunningTime="2025-11-22 00:07:55.804620262 +0000 UTC m=+111.092651453" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.805051 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-kx2zm" podStartSLOduration=90.805045864 podStartE2EDuration="1m30.805045864s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:07:55.79288355 +0000 UTC m=+111.080914761" watchObservedRunningTime="2025-11-22 00:07:55.805045864 +0000 UTC m=+111.093077055" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.819316 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-dsr54" podStartSLOduration=90.819297139 podStartE2EDuration="1m30.819297139s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:07:55.818949728 +0000 UTC m=+111.106980919" watchObservedRunningTime="2025-11-22 00:07:55.819297139 +0000 UTC m=+111.107328330" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.826062 4928 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.826100 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.826109 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.826124 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.826133 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:55Z","lastTransitionTime":"2025-11-22T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.858620 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podStartSLOduration=90.858600102 podStartE2EDuration="1m30.858600102s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:07:55.858172829 +0000 UTC m=+111.146204010" watchObservedRunningTime="2025-11-22 00:07:55.858600102 +0000 UTC m=+111.146631293" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.887334 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-xmk2r" podStartSLOduration=90.887310216 podStartE2EDuration="1m30.887310216s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:07:55.87231035 +0000 UTC m=+111.160341541" watchObservedRunningTime="2025-11-22 00:07:55.887310216 +0000 UTC m=+111.175341407" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.906416 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-lwwmp" podStartSLOduration=90.906397607 podStartE2EDuration="1m30.906397607s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:07:55.891984987 +0000 UTC m=+111.180016178" watchObservedRunningTime="2025-11-22 00:07:55.906397607 +0000 UTC m=+111.194428798" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.928750 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 
00:07:55.928821 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.928830 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.928844 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:55 crc kubenswrapper[4928]: I1122 00:07:55.928873 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:55Z","lastTransitionTime":"2025-11-22T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.030580 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.030614 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.030622 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.030635 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.030644 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:56Z","lastTransitionTime":"2025-11-22T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.132757 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.132802 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.132810 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.132822 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.132831 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:56Z","lastTransitionTime":"2025-11-22T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.235982 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.236030 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.236040 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.236055 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.236066 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:56Z","lastTransitionTime":"2025-11-22T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.337991 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.338018 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.338025 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.338037 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.338046 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:56Z","lastTransitionTime":"2025-11-22T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.440365 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.440416 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.440426 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.440439 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.440447 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:56Z","lastTransitionTime":"2025-11-22T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.544286 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.544342 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.544359 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.544381 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.544397 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:56Z","lastTransitionTime":"2025-11-22T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.646265 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.646303 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.646312 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.646342 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.646353 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:56Z","lastTransitionTime":"2025-11-22T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.749673 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.749753 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.749774 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.749832 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.749853 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:56Z","lastTransitionTime":"2025-11-22T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.853214 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.853302 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.853325 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.853363 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.853385 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:56Z","lastTransitionTime":"2025-11-22T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.956779 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.956904 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.956930 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.956959 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:56 crc kubenswrapper[4928]: I1122 00:07:56.956980 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:56Z","lastTransitionTime":"2025-11-22T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.059585 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.059646 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.059664 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.059689 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.059706 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:57Z","lastTransitionTime":"2025-11-22T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.161638 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.161686 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.161701 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.161721 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.161737 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:57Z","lastTransitionTime":"2025-11-22T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.264667 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.264747 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.264773 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.264857 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.264890 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:57Z","lastTransitionTime":"2025-11-22T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.367036 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.367083 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.367094 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.367112 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.367124 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:57Z","lastTransitionTime":"2025-11-22T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.470361 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.470429 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.470447 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.470470 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.470487 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:57Z","lastTransitionTime":"2025-11-22T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.573671 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.573732 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.573744 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.573763 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.573775 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:57Z","lastTransitionTime":"2025-11-22T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.617689 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.617765 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.617876 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.617659 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:07:57 crc kubenswrapper[4928]: E1122 00:07:57.618051 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:07:57 crc kubenswrapper[4928]: E1122 00:07:57.618301 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:07:57 crc kubenswrapper[4928]: E1122 00:07:57.618742 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:07:57 crc kubenswrapper[4928]: E1122 00:07:57.619179 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.633541 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.633573 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.633584 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.633598 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.633610 4928 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:07:57Z","lastTransitionTime":"2025-11-22T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.691122 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfzqn"] Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.691608 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfzqn" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.693438 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.694215 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.694908 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.695648 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.810885 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21008417-b3ec-4d3d-b293-14a72e5fb97a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vfzqn\" (UID: \"21008417-b3ec-4d3d-b293-14a72e5fb97a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfzqn" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.810988 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21008417-b3ec-4d3d-b293-14a72e5fb97a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vfzqn\" (UID: \"21008417-b3ec-4d3d-b293-14a72e5fb97a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfzqn" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.811083 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/21008417-b3ec-4d3d-b293-14a72e5fb97a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vfzqn\" (UID: \"21008417-b3ec-4d3d-b293-14a72e5fb97a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfzqn" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.811139 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/21008417-b3ec-4d3d-b293-14a72e5fb97a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vfzqn\" (UID: \"21008417-b3ec-4d3d-b293-14a72e5fb97a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfzqn" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.811213 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/21008417-b3ec-4d3d-b293-14a72e5fb97a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vfzqn\" (UID: \"21008417-b3ec-4d3d-b293-14a72e5fb97a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfzqn" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.912337 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/21008417-b3ec-4d3d-b293-14a72e5fb97a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vfzqn\" (UID: \"21008417-b3ec-4d3d-b293-14a72e5fb97a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfzqn" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.912437 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/21008417-b3ec-4d3d-b293-14a72e5fb97a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vfzqn\" (UID: \"21008417-b3ec-4d3d-b293-14a72e5fb97a\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfzqn" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.912510 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21008417-b3ec-4d3d-b293-14a72e5fb97a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vfzqn\" (UID: \"21008417-b3ec-4d3d-b293-14a72e5fb97a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfzqn" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.912521 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/21008417-b3ec-4d3d-b293-14a72e5fb97a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vfzqn\" (UID: \"21008417-b3ec-4d3d-b293-14a72e5fb97a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfzqn" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.912547 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21008417-b3ec-4d3d-b293-14a72e5fb97a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vfzqn\" (UID: \"21008417-b3ec-4d3d-b293-14a72e5fb97a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfzqn" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.912663 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/21008417-b3ec-4d3d-b293-14a72e5fb97a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vfzqn\" (UID: \"21008417-b3ec-4d3d-b293-14a72e5fb97a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfzqn" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.912659 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/21008417-b3ec-4d3d-b293-14a72e5fb97a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vfzqn\" (UID: \"21008417-b3ec-4d3d-b293-14a72e5fb97a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfzqn" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.914297 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/21008417-b3ec-4d3d-b293-14a72e5fb97a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vfzqn\" (UID: \"21008417-b3ec-4d3d-b293-14a72e5fb97a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfzqn" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.919160 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21008417-b3ec-4d3d-b293-14a72e5fb97a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vfzqn\" (UID: \"21008417-b3ec-4d3d-b293-14a72e5fb97a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfzqn" Nov 22 00:07:57 crc kubenswrapper[4928]: I1122 00:07:57.933020 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21008417-b3ec-4d3d-b293-14a72e5fb97a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vfzqn\" (UID: \"21008417-b3ec-4d3d-b293-14a72e5fb97a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfzqn" Nov 22 00:07:58 crc kubenswrapper[4928]: I1122 00:07:58.014965 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfzqn" Nov 22 00:07:58 crc kubenswrapper[4928]: I1122 00:07:58.696966 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfzqn" event={"ID":"21008417-b3ec-4d3d-b293-14a72e5fb97a","Type":"ContainerStarted","Data":"4aba8bd08902f21ec31f804deb1acc7ea76b39d324e7616d199cd48fd8099170"} Nov 22 00:07:58 crc kubenswrapper[4928]: I1122 00:07:58.697345 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfzqn" event={"ID":"21008417-b3ec-4d3d-b293-14a72e5fb97a","Type":"ContainerStarted","Data":"f2d856a789f112569426261ba994a7ddee9932cb34e460fdf95e95282478de3f"} Nov 22 00:07:58 crc kubenswrapper[4928]: I1122 00:07:58.717729 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfzqn" podStartSLOduration=93.717698024 podStartE2EDuration="1m33.717698024s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:07:58.712374733 +0000 UTC m=+114.000405934" watchObservedRunningTime="2025-11-22 00:07:58.717698024 +0000 UTC m=+114.005729255" Nov 22 00:07:59 crc kubenswrapper[4928]: I1122 00:07:59.618002 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:07:59 crc kubenswrapper[4928]: I1122 00:07:59.618092 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:07:59 crc kubenswrapper[4928]: I1122 00:07:59.618092 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:07:59 crc kubenswrapper[4928]: I1122 00:07:59.618193 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:07:59 crc kubenswrapper[4928]: E1122 00:07:59.618190 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:07:59 crc kubenswrapper[4928]: E1122 00:07:59.618323 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:07:59 crc kubenswrapper[4928]: E1122 00:07:59.618398 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:07:59 crc kubenswrapper[4928]: E1122 00:07:59.618542 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:08:00 crc kubenswrapper[4928]: I1122 00:08:00.705587 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xmk2r_c474173e-56bf-467c-84e6-e473d89563c4/kube-multus/1.log" Nov 22 00:08:00 crc kubenswrapper[4928]: I1122 00:08:00.706401 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xmk2r_c474173e-56bf-467c-84e6-e473d89563c4/kube-multus/0.log" Nov 22 00:08:00 crc kubenswrapper[4928]: I1122 00:08:00.706474 4928 generic.go:334] "Generic (PLEG): container finished" podID="c474173e-56bf-467c-84e6-e473d89563c4" containerID="7e8a6e52face946c0a61439452b4d078ca11a0fac66d2e687e86442e4b4acf7f" exitCode=1 Nov 22 00:08:00 crc kubenswrapper[4928]: I1122 00:08:00.706519 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xmk2r" event={"ID":"c474173e-56bf-467c-84e6-e473d89563c4","Type":"ContainerDied","Data":"7e8a6e52face946c0a61439452b4d078ca11a0fac66d2e687e86442e4b4acf7f"} Nov 22 00:08:00 crc kubenswrapper[4928]: I1122 00:08:00.706568 4928 scope.go:117] "RemoveContainer" containerID="18d3703356e5ef47fa972881f3b0b82ca27cb8ebf49f88f3fe88d12d08c42355" Nov 22 00:08:00 crc kubenswrapper[4928]: I1122 00:08:00.707351 4928 scope.go:117] "RemoveContainer" containerID="7e8a6e52face946c0a61439452b4d078ca11a0fac66d2e687e86442e4b4acf7f" Nov 22 00:08:00 crc kubenswrapper[4928]: E1122 
00:08:00.707624 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-xmk2r_openshift-multus(c474173e-56bf-467c-84e6-e473d89563c4)\"" pod="openshift-multus/multus-xmk2r" podUID="c474173e-56bf-467c-84e6-e473d89563c4" Nov 22 00:08:01 crc kubenswrapper[4928]: I1122 00:08:01.617596 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:08:01 crc kubenswrapper[4928]: I1122 00:08:01.617703 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:08:01 crc kubenswrapper[4928]: I1122 00:08:01.617707 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:08:01 crc kubenswrapper[4928]: E1122 00:08:01.617848 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:08:01 crc kubenswrapper[4928]: I1122 00:08:01.617610 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:08:01 crc kubenswrapper[4928]: E1122 00:08:01.618047 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:08:01 crc kubenswrapper[4928]: E1122 00:08:01.618240 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:08:01 crc kubenswrapper[4928]: E1122 00:08:01.618437 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:08:01 crc kubenswrapper[4928]: I1122 00:08:01.712665 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xmk2r_c474173e-56bf-467c-84e6-e473d89563c4/kube-multus/1.log" Nov 22 00:08:03 crc kubenswrapper[4928]: I1122 00:08:03.618115 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:08:03 crc kubenswrapper[4928]: E1122 00:08:03.618239 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:08:03 crc kubenswrapper[4928]: I1122 00:08:03.618125 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:08:03 crc kubenswrapper[4928]: I1122 00:08:03.618543 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:08:03 crc kubenswrapper[4928]: I1122 00:08:03.618585 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:08:03 crc kubenswrapper[4928]: E1122 00:08:03.618673 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:08:03 crc kubenswrapper[4928]: E1122 00:08:03.618762 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:08:03 crc kubenswrapper[4928]: E1122 00:08:03.618945 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:08:05 crc kubenswrapper[4928]: E1122 00:08:05.604857 4928 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 22 00:08:05 crc kubenswrapper[4928]: I1122 00:08:05.617545 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:08:05 crc kubenswrapper[4928]: I1122 00:08:05.617615 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:08:05 crc kubenswrapper[4928]: I1122 00:08:05.617583 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:08:05 crc kubenswrapper[4928]: I1122 00:08:05.617559 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:08:05 crc kubenswrapper[4928]: E1122 00:08:05.618654 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:08:05 crc kubenswrapper[4928]: E1122 00:08:05.618774 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:08:05 crc kubenswrapper[4928]: E1122 00:08:05.618920 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:08:05 crc kubenswrapper[4928]: E1122 00:08:05.619002 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:08:05 crc kubenswrapper[4928]: E1122 00:08:05.737913 4928 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 22 00:08:06 crc kubenswrapper[4928]: I1122 00:08:06.619280 4928 scope.go:117] "RemoveContainer" containerID="de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10" Nov 22 00:08:07 crc kubenswrapper[4928]: I1122 00:08:07.509939 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kx6zl"] Nov 22 00:08:07 crc kubenswrapper[4928]: I1122 00:08:07.510569 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:08:07 crc kubenswrapper[4928]: E1122 00:08:07.510684 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:08:07 crc kubenswrapper[4928]: I1122 00:08:07.618335 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:08:07 crc kubenswrapper[4928]: E1122 00:08:07.618473 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:08:07 crc kubenswrapper[4928]: I1122 00:08:07.618336 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:08:07 crc kubenswrapper[4928]: I1122 00:08:07.618593 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:08:07 crc kubenswrapper[4928]: E1122 00:08:07.619120 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:08:07 crc kubenswrapper[4928]: E1122 00:08:07.619256 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:08:07 crc kubenswrapper[4928]: I1122 00:08:07.738286 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nchmx_31460a9d-62d5-4dae-bb57-4db45544352d/ovnkube-controller/3.log" Nov 22 00:08:07 crc kubenswrapper[4928]: I1122 00:08:07.743569 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" event={"ID":"31460a9d-62d5-4dae-bb57-4db45544352d","Type":"ContainerStarted","Data":"8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081"} Nov 22 00:08:07 crc kubenswrapper[4928]: I1122 00:08:07.745442 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:08:07 crc kubenswrapper[4928]: I1122 00:08:07.792249 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" podStartSLOduration=102.792232806 podStartE2EDuration="1m42.792232806s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:07.791968749 +0000 UTC m=+123.079999940" watchObservedRunningTime="2025-11-22 00:08:07.792232806 +0000 UTC m=+123.080263997" Nov 22 00:08:09 crc kubenswrapper[4928]: I1122 00:08:09.617597 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:08:09 crc kubenswrapper[4928]: I1122 00:08:09.617746 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:08:09 crc kubenswrapper[4928]: I1122 00:08:09.617640 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:08:09 crc kubenswrapper[4928]: I1122 00:08:09.617640 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:08:09 crc kubenswrapper[4928]: E1122 00:08:09.618050 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:08:09 crc kubenswrapper[4928]: E1122 00:08:09.618216 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:08:09 crc kubenswrapper[4928]: E1122 00:08:09.618374 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:08:09 crc kubenswrapper[4928]: E1122 00:08:09.618544 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:08:10 crc kubenswrapper[4928]: E1122 00:08:10.739698 4928 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 00:08:11 crc kubenswrapper[4928]: I1122 00:08:11.618099 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:08:11 crc kubenswrapper[4928]: I1122 00:08:11.618137 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:08:11 crc kubenswrapper[4928]: E1122 00:08:11.618294 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:08:11 crc kubenswrapper[4928]: I1122 00:08:11.618095 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:08:11 crc kubenswrapper[4928]: I1122 00:08:11.618389 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:08:11 crc kubenswrapper[4928]: E1122 00:08:11.618607 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:08:11 crc kubenswrapper[4928]: E1122 00:08:11.618669 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:08:11 crc kubenswrapper[4928]: E1122 00:08:11.618849 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:08:12 crc kubenswrapper[4928]: I1122 00:08:12.618481 4928 scope.go:117] "RemoveContainer" containerID="7e8a6e52face946c0a61439452b4d078ca11a0fac66d2e687e86442e4b4acf7f" Nov 22 00:08:12 crc kubenswrapper[4928]: I1122 00:08:12.765174 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xmk2r_c474173e-56bf-467c-84e6-e473d89563c4/kube-multus/1.log" Nov 22 00:08:13 crc kubenswrapper[4928]: I1122 00:08:13.617563 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:08:13 crc kubenswrapper[4928]: I1122 00:08:13.617658 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:08:13 crc kubenswrapper[4928]: I1122 00:08:13.617570 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:08:13 crc kubenswrapper[4928]: E1122 00:08:13.617762 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:08:13 crc kubenswrapper[4928]: I1122 00:08:13.617835 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:08:13 crc kubenswrapper[4928]: E1122 00:08:13.617950 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:08:13 crc kubenswrapper[4928]: E1122 00:08:13.618056 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:08:13 crc kubenswrapper[4928]: E1122 00:08:13.618140 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:08:13 crc kubenswrapper[4928]: I1122 00:08:13.773081 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xmk2r_c474173e-56bf-467c-84e6-e473d89563c4/kube-multus/1.log" Nov 22 00:08:13 crc kubenswrapper[4928]: I1122 00:08:13.773178 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xmk2r" event={"ID":"c474173e-56bf-467c-84e6-e473d89563c4","Type":"ContainerStarted","Data":"64911518eaed9b948c61c7dbf99b4937d5bb5d2dc74889c3e69c591502f950ed"} Nov 22 00:08:15 crc kubenswrapper[4928]: I1122 00:08:15.618018 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:08:15 crc kubenswrapper[4928]: I1122 00:08:15.618149 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:08:15 crc kubenswrapper[4928]: I1122 00:08:15.618693 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:08:15 crc kubenswrapper[4928]: I1122 00:08:15.620528 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:08:15 crc kubenswrapper[4928]: E1122 00:08:15.620516 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 00:08:15 crc kubenswrapper[4928]: E1122 00:08:15.620722 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 00:08:15 crc kubenswrapper[4928]: E1122 00:08:15.620886 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 00:08:15 crc kubenswrapper[4928]: E1122 00:08:15.621029 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kx6zl" podUID="b3aa30f0-2a7f-4b60-b254-3e894a581f89" Nov 22 00:08:17 crc kubenswrapper[4928]: I1122 00:08:17.618028 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:08:17 crc kubenswrapper[4928]: I1122 00:08:17.618356 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:08:17 crc kubenswrapper[4928]: I1122 00:08:17.618523 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:08:17 crc kubenswrapper[4928]: I1122 00:08:17.619036 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:08:17 crc kubenswrapper[4928]: I1122 00:08:17.621093 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 22 00:08:17 crc kubenswrapper[4928]: I1122 00:08:17.621488 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 22 00:08:17 crc kubenswrapper[4928]: I1122 00:08:17.621858 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 22 00:08:17 crc kubenswrapper[4928]: I1122 00:08:17.621952 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 22 00:08:17 crc kubenswrapper[4928]: I1122 00:08:17.622196 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 22 00:08:17 crc kubenswrapper[4928]: I1122 00:08:17.622454 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 22 00:08:17 crc kubenswrapper[4928]: I1122 00:08:17.957690 4928 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.021674 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-d7zcl"] Nov 22 00:08:18 crc 
kubenswrapper[4928]: I1122 00:08:18.023363 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-d7zcl" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.030284 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wqtln"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.031765 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-wqtln" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.037291 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-fgph7"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.037810 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgph7" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.039591 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.040568 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5e80a65-5318-45b5-a1c3-cd137679d471-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-d7zcl\" (UID: \"f5e80a65-5318-45b5-a1c3-cd137679d471\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d7zcl" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.040627 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5e80a65-5318-45b5-a1c3-cd137679d471-serving-cert\") pod \"authentication-operator-69f744f599-d7zcl\" (UID: \"f5e80a65-5318-45b5-a1c3-cd137679d471\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-d7zcl" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.040654 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.040679 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5e80a65-5318-45b5-a1c3-cd137679d471-config\") pod \"authentication-operator-69f744f599-d7zcl\" (UID: \"f5e80a65-5318-45b5-a1c3-cd137679d471\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d7zcl" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.040710 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5e80a65-5318-45b5-a1c3-cd137679d471-service-ca-bundle\") pod \"authentication-operator-69f744f599-d7zcl\" (UID: \"f5e80a65-5318-45b5-a1c3-cd137679d471\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d7zcl" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.040753 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8dk7\" (UniqueName: \"kubernetes.io/projected/f5e80a65-5318-45b5-a1c3-cd137679d471-kube-api-access-k8dk7\") pod \"authentication-operator-69f744f599-d7zcl\" (UID: \"f5e80a65-5318-45b5-a1c3-cd137679d471\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d7zcl" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.046853 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.047222 4928 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-apiserver/apiserver-76f77b778f-kxr5c"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.047969 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-87wq5"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.048033 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.048226 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.048333 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-87wq5" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.048394 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.048622 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.048683 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-kxr5c" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.049313 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.049376 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.049465 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.049672 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.051983 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.052330 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.052128 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.058016 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.058530 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.058854 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.059662 4928 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g8hz8"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.059766 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.060114 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.060137 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29396160-qh9g4"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.060388 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8hklq"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.060741 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8hklq" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.061133 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g8hz8" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.062305 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.062848 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.063108 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.063130 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.063299 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rgn8z"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.062954 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29396160-qh9g4" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.063764 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rgn8z" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.063937 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.064257 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.065075 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.065732 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.066664 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.067099 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.067152 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.067588 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.067703 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.067717 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.069939 4928 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.070145 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.070345 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.070372 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.070493 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.070638 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.073168 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dc949"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.073879 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.084365 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-fkl9p"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.084903 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-n62d6"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.085219 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.085497 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.085513 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n62d6" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.085722 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.085845 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.085955 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-fkl9p" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.091558 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.092195 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.092588 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.092851 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.092869 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tmdvd"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.092964 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.092970 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.093163 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.093348 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zx7h8"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.093757 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zx7h8" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.094312 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.094496 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.094623 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.094323 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.094414 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.094908 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"pruner-dockercfg-p7bcw" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.095560 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.095732 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.108048 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.108411 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 
00:08:18.108655 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"serviceca" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.109053 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.109258 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.109305 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.109667 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.109756 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.110376 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-bttq9"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.112061 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-bttq9" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.113421 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.114099 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.115065 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.121529 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5c8bn"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.124625 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m9ggs"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.137953 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5c8bn" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.140616 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.140775 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m9ggs" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.141190 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5e80a65-5318-45b5-a1c3-cd137679d471-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-d7zcl\" (UID: \"f5e80a65-5318-45b5-a1c3-cd137679d471\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d7zcl" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.141234 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5e80a65-5318-45b5-a1c3-cd137679d471-serving-cert\") pod \"authentication-operator-69f744f599-d7zcl\" (UID: \"f5e80a65-5318-45b5-a1c3-cd137679d471\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d7zcl" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.141256 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5e80a65-5318-45b5-a1c3-cd137679d471-config\") pod \"authentication-operator-69f744f599-d7zcl\" (UID: \"f5e80a65-5318-45b5-a1c3-cd137679d471\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d7zcl" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.141272 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5e80a65-5318-45b5-a1c3-cd137679d471-service-ca-bundle\") pod \"authentication-operator-69f744f599-d7zcl\" (UID: \"f5e80a65-5318-45b5-a1c3-cd137679d471\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d7zcl" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.141299 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-k8dk7\" (UniqueName: \"kubernetes.io/projected/f5e80a65-5318-45b5-a1c3-cd137679d471-kube-api-access-k8dk7\") pod \"authentication-operator-69f744f599-d7zcl\" (UID: \"f5e80a65-5318-45b5-a1c3-cd137679d471\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d7zcl" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.141309 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-25w2t"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.142014 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-25w2t" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.143634 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5e80a65-5318-45b5-a1c3-cd137679d471-service-ca-bundle\") pod \"authentication-operator-69f744f599-d7zcl\" (UID: \"f5e80a65-5318-45b5-a1c3-cd137679d471\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d7zcl" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.143669 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5e80a65-5318-45b5-a1c3-cd137679d471-config\") pod \"authentication-operator-69f744f599-d7zcl\" (UID: \"f5e80a65-5318-45b5-a1c3-cd137679d471\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d7zcl" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.149783 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.150024 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.150130 4928 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.150216 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.150462 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.150561 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.150740 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.150898 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.151073 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.151130 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.151210 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.151271 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.151312 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 22 00:08:18 crc 
kubenswrapper[4928]: I1122 00:08:18.151370 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.151387 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.151684 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.151824 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.151938 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.152725 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.152915 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.153072 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.153848 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.154704 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.154883 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-c64mx"] Nov 22 00:08:18 crc 
kubenswrapper[4928]: I1122 00:08:18.155660 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xb2wb"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.156062 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wv9p"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.159265 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.159439 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wv9p" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.159723 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-c64mx" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.159859 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xb2wb" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.159729 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.160206 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.161337 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.164709 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-c2zkd"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.165489 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-rz49f"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.165877 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjn2t"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.166071 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-rz49f" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.166310 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2zkd" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.166542 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjn2t" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.167045 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.167464 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.167549 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.167890 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.170725 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.172243 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.172680 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ftrnl"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.173370 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7xwgw"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.173692 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.174031 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ftrnl" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.175269 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5e80a65-5318-45b5-a1c3-cd137679d471-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-d7zcl\" (UID: \"f5e80a65-5318-45b5-a1c3-cd137679d471\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d7zcl" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.183404 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmvbl"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.184640 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.186208 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7xwgw" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.188150 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5e80a65-5318-45b5-a1c3-cd137679d471-serving-cert\") pod \"authentication-operator-69f744f599-d7zcl\" (UID: \"f5e80a65-5318-45b5-a1c3-cd137679d471\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d7zcl" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.189594 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.192330 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lpmm4"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.192908 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lpmm4" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.194295 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396160-c8ch5"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.195295 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmvbl" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.197861 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396160-c8ch5" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.199053 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.202578 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.199455 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chsjj"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.209845 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqrxk"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.210150 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chsjj" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.210402 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-c4rrg"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.210905 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzv4d"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.211266 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hnm59"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.211540 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.211658 4928 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-j72fl"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.212001 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c4rrg" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.212131 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-54l8n"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.212234 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqrxk" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.212327 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzv4d" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.212448 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lhcmt"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.212535 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hnm59" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.212552 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j72fl" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.212577 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-54l8n" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.215152 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mp624"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.215643 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mp624" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.215661 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lhcmt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.219327 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-d7zcl"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.219358 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-dqtnp"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.219865 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g8hz8"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.219920 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-dqtnp" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.221898 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wqtln"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.222088 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29396160-qh9g4"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.223157 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-kxr5c"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.223373 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8dk7\" (UniqueName: \"kubernetes.io/projected/f5e80a65-5318-45b5-a1c3-cd137679d471-kube-api-access-k8dk7\") pod \"authentication-operator-69f744f599-d7zcl\" (UID: \"f5e80a65-5318-45b5-a1c3-cd137679d471\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d7zcl" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.224765 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-87wq5"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.226038 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.228501 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zx7h8"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.228528 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rgn8z"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.229414 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m9ggs"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.231439 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fkl9p"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.232382 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-25w2t"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.233769 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8hklq"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.234415 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-c2zkd"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.240021 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5c8bn"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.241325 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bttq9"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.241815 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a07cf7e-773c-46ea-989f-ec01540e81ee-config\") pod \"machine-api-operator-5694c8668f-wqtln\" (UID: \"4a07cf7e-773c-46ea-989f-ec01540e81ee\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wqtln" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.241841 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5x4x\" (UniqueName: \"kubernetes.io/projected/d29a2448-633a-472f-a072-849f5d90297e-kube-api-access-l5x4x\") pod \"etcd-operator-b45778765-zx7h8\" (UID: \"d29a2448-633a-472f-a072-849f5d90297e\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-zx7h8" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.241861 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4a07cf7e-773c-46ea-989f-ec01540e81ee-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wqtln\" (UID: \"4a07cf7e-773c-46ea-989f-ec01540e81ee\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wqtln" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.241876 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a5945d14-c3fa-42ae-91c3-64fe566c1b20-oauth-serving-cert\") pod \"console-f9d7485db-fkl9p\" (UID: \"a5945d14-c3fa-42ae-91c3-64fe566c1b20\") " pod="openshift-console/console-f9d7485db-fkl9p" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.241893 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7rth\" (UniqueName: \"kubernetes.io/projected/ff645533-ce91-410f-9b30-134b6ab134f4-kube-api-access-t7rth\") pod \"machine-approver-56656f9798-fgph7\" (UID: \"ff645533-ce91-410f-9b30-134b6ab134f4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgph7" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.241907 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw8ph\" (UniqueName: \"kubernetes.io/projected/c10b9e4e-381b-425e-a1ce-3615a7580960-kube-api-access-kw8ph\") pod \"image-pruner-29396160-qh9g4\" (UID: \"c10b9e4e-381b-425e-a1ce-3615a7580960\") " pod="openshift-image-registry/image-pruner-29396160-qh9g4" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.241924 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/0d99453e-a6fd-4903-9889-0f2229a26fbf-config\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.241949 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7j5q\" (UniqueName: \"kubernetes.io/projected/a49e097b-7213-4398-800a-02a39ebc297e-kube-api-access-s7j5q\") pod \"downloads-7954f5f757-bttq9\" (UID: \"a49e097b-7213-4398-800a-02a39ebc297e\") " pod="openshift-console/downloads-7954f5f757-bttq9" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.241964 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8fxm\" (UniqueName: \"kubernetes.io/projected/24972a91-2eb5-4dd5-9146-718e7fd61403-kube-api-access-l8fxm\") pod \"openshift-apiserver-operator-796bbdcf4f-g8hz8\" (UID: \"24972a91-2eb5-4dd5-9146-718e7fd61403\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g8hz8" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.241981 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2mwr\" (UniqueName: \"kubernetes.io/projected/44a56f60-93be-4061-b8e9-36c22b3ef721-kube-api-access-n2mwr\") pod \"console-operator-58897d9998-8hklq\" (UID: \"44a56f60-93be-4061-b8e9-36c22b3ef721\") " pod="openshift-console-operator/console-operator-58897d9998-8hklq" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.241996 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ef1ace38-76ed-4913-a9ac-8954a34820ae-audit-policies\") pod \"apiserver-7bbb656c7d-7x8dk\" (UID: \"ef1ace38-76ed-4913-a9ac-8954a34820ae\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk" Nov 
22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242011 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a5945d14-c3fa-42ae-91c3-64fe566c1b20-console-config\") pod \"console-f9d7485db-fkl9p\" (UID: \"a5945d14-c3fa-42ae-91c3-64fe566c1b20\") " pod="openshift-console/console-f9d7485db-fkl9p" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242034 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ef1ace38-76ed-4913-a9ac-8954a34820ae-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7x8dk\" (UID: \"ef1ace38-76ed-4913-a9ac-8954a34820ae\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242052 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d29a2448-633a-472f-a072-849f5d90297e-etcd-ca\") pod \"etcd-operator-b45778765-zx7h8\" (UID: \"d29a2448-633a-472f-a072-849f5d90297e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zx7h8" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242068 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44a56f60-93be-4061-b8e9-36c22b3ef721-serving-cert\") pod \"console-operator-58897d9998-8hklq\" (UID: \"44a56f60-93be-4061-b8e9-36c22b3ef721\") " pod="openshift-console-operator/console-operator-58897d9998-8hklq" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242091 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njgph\" (UniqueName: \"kubernetes.io/projected/4cf5ba26-bf7c-436f-9b1c-1d1a7f6bad37-kube-api-access-njgph\") pod 
\"cluster-image-registry-operator-dc59b4c8b-5c8bn\" (UID: \"4cf5ba26-bf7c-436f-9b1c-1d1a7f6bad37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5c8bn" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242108 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242124 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0d99453e-a6fd-4903-9889-0f2229a26fbf-audit-dir\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242151 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff645533-ce91-410f-9b30-134b6ab134f4-config\") pod \"machine-approver-56656f9798-fgph7\" (UID: \"ff645533-ce91-410f-9b30-134b6ab134f4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgph7" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242169 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b36deea-9635-49f8-862a-b9067233fd24-serving-cert\") pod \"route-controller-manager-6576b87f9c-87wq5\" (UID: \"5b36deea-9635-49f8-862a-b9067233fd24\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-87wq5" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242185 4928 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d29a2448-633a-472f-a072-849f5d90297e-serving-cert\") pod \"etcd-operator-b45778765-zx7h8\" (UID: \"d29a2448-633a-472f-a072-849f5d90297e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zx7h8" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242201 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2f0c943c-e71b-49f1-bce1-4d5bebec0bce-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-25w2t\" (UID: \"2f0c943c-e71b-49f1-bce1-4d5bebec0bce\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-25w2t" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242215 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ef1ace38-76ed-4913-a9ac-8954a34820ae-etcd-client\") pod \"apiserver-7bbb656c7d-7x8dk\" (UID: \"ef1ace38-76ed-4913-a9ac-8954a34820ae\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242231 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef1ace38-76ed-4913-a9ac-8954a34820ae-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7x8dk\" (UID: \"ef1ace38-76ed-4913-a9ac-8954a34820ae\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242246 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cf5ba26-bf7c-436f-9b1c-1d1a7f6bad37-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5c8bn\" (UID: 
\"4cf5ba26-bf7c-436f-9b1c-1d1a7f6bad37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5c8bn" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242262 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4cf5ba26-bf7c-436f-9b1c-1d1a7f6bad37-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5c8bn\" (UID: \"4cf5ba26-bf7c-436f-9b1c-1d1a7f6bad37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5c8bn" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242280 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4a07cf7e-773c-46ea-989f-ec01540e81ee-images\") pod \"machine-api-operator-5694c8668f-wqtln\" (UID: \"4a07cf7e-773c-46ea-989f-ec01540e81ee\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wqtln" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242295 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242311 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a5945d14-c3fa-42ae-91c3-64fe566c1b20-console-oauth-config\") pod \"console-f9d7485db-fkl9p\" (UID: \"a5945d14-c3fa-42ae-91c3-64fe566c1b20\") " pod="openshift-console/console-f9d7485db-fkl9p" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242333 4928 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242362 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef1ace38-76ed-4913-a9ac-8954a34820ae-serving-cert\") pod \"apiserver-7bbb656c7d-7x8dk\" (UID: \"ef1ace38-76ed-4913-a9ac-8954a34820ae\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242377 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a5945d14-c3fa-42ae-91c3-64fe566c1b20-console-serving-cert\") pod \"console-f9d7485db-fkl9p\" (UID: \"a5945d14-c3fa-42ae-91c3-64fe566c1b20\") " pod="openshift-console/console-f9d7485db-fkl9p" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242392 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht2k4\" (UniqueName: \"kubernetes.io/projected/4a07cf7e-773c-46ea-989f-ec01540e81ee-kube-api-access-ht2k4\") pod \"machine-api-operator-5694c8668f-wqtln\" (UID: \"4a07cf7e-773c-46ea-989f-ec01540e81ee\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wqtln" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242407 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d99453e-a6fd-4903-9889-0f2229a26fbf-serving-cert\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " 
pod="openshift-apiserver/apiserver-76f77b778f-kxr5c" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242423 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ff645533-ce91-410f-9b30-134b6ab134f4-machine-approver-tls\") pod \"machine-approver-56656f9798-fgph7\" (UID: \"ff645533-ce91-410f-9b30-134b6ab134f4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgph7" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242437 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242453 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/42bd19a9-be1a-4c00-bf19-536e55a1ae11-audit-policies\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242467 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wdpp\" (UniqueName: \"kubernetes.io/projected/42bd19a9-be1a-4c00-bf19-536e55a1ae11-kube-api-access-7wdpp\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242481 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d99453e-a6fd-4903-9889-0f2229a26fbf-trusted-ca-bundle\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242527 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44a56f60-93be-4061-b8e9-36c22b3ef721-trusted-ca\") pod \"console-operator-58897d9998-8hklq\" (UID: \"44a56f60-93be-4061-b8e9-36c22b3ef721\") " pod="openshift-console-operator/console-operator-58897d9998-8hklq" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242541 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0d99453e-a6fd-4903-9889-0f2229a26fbf-node-pullsecrets\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242557 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f67b8\" (UniqueName: \"kubernetes.io/projected/5b36deea-9635-49f8-862a-b9067233fd24-kube-api-access-f67b8\") pod \"route-controller-manager-6576b87f9c-87wq5\" (UID: \"5b36deea-9635-49f8-862a-b9067233fd24\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-87wq5" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242573 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsqk9\" (UniqueName: \"kubernetes.io/projected/2f0c943c-e71b-49f1-bce1-4d5bebec0bce-kube-api-access-zsqk9\") pod \"multus-admission-controller-857f4d67dd-25w2t\" (UID: \"2f0c943c-e71b-49f1-bce1-4d5bebec0bce\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-25w2t" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242589 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5945d14-c3fa-42ae-91c3-64fe566c1b20-trusted-ca-bundle\") pod \"console-f9d7485db-fkl9p\" (UID: \"a5945d14-c3fa-42ae-91c3-64fe566c1b20\") " pod="openshift-console/console-f9d7485db-fkl9p" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242604 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d29a2448-633a-472f-a072-849f5d90297e-config\") pod \"etcd-operator-b45778765-zx7h8\" (UID: \"d29a2448-633a-472f-a072-849f5d90297e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zx7h8" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242619 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d29a2448-633a-472f-a072-849f5d90297e-etcd-client\") pod \"etcd-operator-b45778765-zx7h8\" (UID: \"d29a2448-633a-472f-a072-849f5d90297e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zx7h8" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242635 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cf5ba26-bf7c-436f-9b1c-1d1a7f6bad37-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5c8bn\" (UID: \"4cf5ba26-bf7c-436f-9b1c-1d1a7f6bad37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5c8bn" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242651 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242674 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzcz7\" (UniqueName: \"kubernetes.io/projected/ef1ace38-76ed-4913-a9ac-8954a34820ae-kube-api-access-wzcz7\") pod \"apiserver-7bbb656c7d-7x8dk\" (UID: \"ef1ace38-76ed-4913-a9ac-8954a34820ae\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242689 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0d99453e-a6fd-4903-9889-0f2229a26fbf-etcd-serving-ca\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242704 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0d99453e-a6fd-4903-9889-0f2229a26fbf-image-import-ca\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242720 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" Nov 22 00:08:18 crc 
kubenswrapper[4928]: I1122 00:08:18.242735 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfblj\" (UniqueName: \"kubernetes.io/projected/0d99453e-a6fd-4903-9889-0f2229a26fbf-kube-api-access-qfblj\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242758 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c10b9e4e-381b-425e-a1ce-3615a7580960-serviceca\") pod \"image-pruner-29396160-qh9g4\" (UID: \"c10b9e4e-381b-425e-a1ce-3615a7580960\") " pod="openshift-image-registry/image-pruner-29396160-qh9g4" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242772 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242804 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24972a91-2eb5-4dd5-9146-718e7fd61403-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-g8hz8\" (UID: \"24972a91-2eb5-4dd5-9146-718e7fd61403\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g8hz8" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242819 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/0d99453e-a6fd-4903-9889-0f2229a26fbf-etcd-client\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242834 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d29a2448-633a-472f-a072-849f5d90297e-etcd-service-ca\") pod \"etcd-operator-b45778765-zx7h8\" (UID: \"d29a2448-633a-472f-a072-849f5d90297e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zx7h8" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242848 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242862 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a5945d14-c3fa-42ae-91c3-64fe566c1b20-service-ca\") pod \"console-f9d7485db-fkl9p\" (UID: \"a5945d14-c3fa-42ae-91c3-64fe566c1b20\") " pod="openshift-console/console-f9d7485db-fkl9p" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242875 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b36deea-9635-49f8-862a-b9067233fd24-config\") pod \"route-controller-manager-6576b87f9c-87wq5\" (UID: \"5b36deea-9635-49f8-862a-b9067233fd24\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-87wq5" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 
00:08:18.242888 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0d99453e-a6fd-4903-9889-0f2229a26fbf-audit\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242905 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242918 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ef1ace38-76ed-4913-a9ac-8954a34820ae-encryption-config\") pod \"apiserver-7bbb656c7d-7x8dk\" (UID: \"ef1ace38-76ed-4913-a9ac-8954a34820ae\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242942 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24972a91-2eb5-4dd5-9146-718e7fd61403-config\") pod \"openshift-apiserver-operator-796bbdcf4f-g8hz8\" (UID: \"24972a91-2eb5-4dd5-9146-718e7fd61403\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g8hz8" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242959 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242974 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxqql\" (UniqueName: \"kubernetes.io/projected/a5945d14-c3fa-42ae-91c3-64fe566c1b20-kube-api-access-hxqql\") pod \"console-f9d7485db-fkl9p\" (UID: \"a5945d14-c3fa-42ae-91c3-64fe566c1b20\") " pod="openshift-console/console-f9d7485db-fkl9p" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.242988 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b36deea-9635-49f8-862a-b9067233fd24-client-ca\") pod \"route-controller-manager-6576b87f9c-87wq5\" (UID: \"5b36deea-9635-49f8-862a-b9067233fd24\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-87wq5" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.243002 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44a56f60-93be-4061-b8e9-36c22b3ef721-config\") pod \"console-operator-58897d9998-8hklq\" (UID: \"44a56f60-93be-4061-b8e9-36c22b3ef721\") " pod="openshift-console-operator/console-operator-58897d9998-8hklq" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.243017 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.243031 4928 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ef1ace38-76ed-4913-a9ac-8954a34820ae-audit-dir\") pod \"apiserver-7bbb656c7d-7x8dk\" (UID: \"ef1ace38-76ed-4913-a9ac-8954a34820ae\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.243047 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ff645533-ce91-410f-9b30-134b6ab134f4-auth-proxy-config\") pod \"machine-approver-56656f9798-fgph7\" (UID: \"ff645533-ce91-410f-9b30-134b6ab134f4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgph7" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.243061 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42bd19a9-be1a-4c00-bf19-536e55a1ae11-audit-dir\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.243074 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0d99453e-a6fd-4903-9889-0f2229a26fbf-encryption-config\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.243787 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7xwgw"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.246261 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-vq7f7"] Nov 22 00:08:18 
crc kubenswrapper[4928]: I1122 00:08:18.246977 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-vq7f7" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.248311 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lhcmt"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.248999 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.251230 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dc949"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.252358 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wv9p"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.253136 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tmdvd"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.254837 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-n62d6"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.255967 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqrxk"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.257158 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ftrnl"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.258113 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmvbl"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.259186 4928 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lpmm4"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.260101 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjn2t"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.261034 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xb2wb"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.262770 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.266427 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.273236 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396160-c8ch5"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.276737 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzv4d"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.279190 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mp624"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.282599 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-c64mx"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.283910 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chsjj"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.285183 4928 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-54l8n"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.285941 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.287087 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-c4rrg"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.288161 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hnm59"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.289214 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-dqtnp"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.290422 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-j72fl"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.291541 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-87wpn"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.292632 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-87wpn" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.292748 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-87wpn"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.306391 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.329621 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.331243 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6fxj5"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.335257 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6fxj5" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.341513 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6fxj5"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.343570 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d29a2448-633a-472f-a072-849f5d90297e-etcd-service-ca\") pod \"etcd-operator-b45778765-zx7h8\" (UID: \"d29a2448-633a-472f-a072-849f5d90297e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zx7h8" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.343596 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.343616 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a5945d14-c3fa-42ae-91c3-64fe566c1b20-service-ca\") pod \"console-f9d7485db-fkl9p\" (UID: \"a5945d14-c3fa-42ae-91c3-64fe566c1b20\") " pod="openshift-console/console-f9d7485db-fkl9p" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.343633 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b36deea-9635-49f8-862a-b9067233fd24-config\") pod \"route-controller-manager-6576b87f9c-87wq5\" (UID: \"5b36deea-9635-49f8-862a-b9067233fd24\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-87wq5" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.343650 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0d99453e-a6fd-4903-9889-0f2229a26fbf-audit\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.343666 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.343684 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ef1ace38-76ed-4913-a9ac-8954a34820ae-encryption-config\") pod \"apiserver-7bbb656c7d-7x8dk\" (UID: 
\"ef1ace38-76ed-4913-a9ac-8954a34820ae\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.343709 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24972a91-2eb5-4dd5-9146-718e7fd61403-config\") pod \"openshift-apiserver-operator-796bbdcf4f-g8hz8\" (UID: \"24972a91-2eb5-4dd5-9146-718e7fd61403\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g8hz8" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.343723 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.343737 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxqql\" (UniqueName: \"kubernetes.io/projected/a5945d14-c3fa-42ae-91c3-64fe566c1b20-kube-api-access-hxqql\") pod \"console-f9d7485db-fkl9p\" (UID: \"a5945d14-c3fa-42ae-91c3-64fe566c1b20\") " pod="openshift-console/console-f9d7485db-fkl9p" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.343753 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b36deea-9635-49f8-862a-b9067233fd24-client-ca\") pod \"route-controller-manager-6576b87f9c-87wq5\" (UID: \"5b36deea-9635-49f8-862a-b9067233fd24\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-87wq5" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.343766 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/44a56f60-93be-4061-b8e9-36c22b3ef721-config\") pod \"console-operator-58897d9998-8hklq\" (UID: \"44a56f60-93be-4061-b8e9-36c22b3ef721\") " pod="openshift-console-operator/console-operator-58897d9998-8hklq" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.343781 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.343808 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ef1ace38-76ed-4913-a9ac-8954a34820ae-audit-dir\") pod \"apiserver-7bbb656c7d-7x8dk\" (UID: \"ef1ace38-76ed-4913-a9ac-8954a34820ae\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.343825 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ff645533-ce91-410f-9b30-134b6ab134f4-auth-proxy-config\") pod \"machine-approver-56656f9798-fgph7\" (UID: \"ff645533-ce91-410f-9b30-134b6ab134f4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgph7" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.343840 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42bd19a9-be1a-4c00-bf19-536e55a1ae11-audit-dir\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.343855 4928 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0d99453e-a6fd-4903-9889-0f2229a26fbf-encryption-config\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.343871 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a07cf7e-773c-46ea-989f-ec01540e81ee-config\") pod \"machine-api-operator-5694c8668f-wqtln\" (UID: \"4a07cf7e-773c-46ea-989f-ec01540e81ee\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wqtln" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.343888 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5x4x\" (UniqueName: \"kubernetes.io/projected/d29a2448-633a-472f-a072-849f5d90297e-kube-api-access-l5x4x\") pod \"etcd-operator-b45778765-zx7h8\" (UID: \"d29a2448-633a-472f-a072-849f5d90297e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zx7h8" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.343904 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4a07cf7e-773c-46ea-989f-ec01540e81ee-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wqtln\" (UID: \"4a07cf7e-773c-46ea-989f-ec01540e81ee\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wqtln" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.343918 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a5945d14-c3fa-42ae-91c3-64fe566c1b20-oauth-serving-cert\") pod \"console-f9d7485db-fkl9p\" (UID: \"a5945d14-c3fa-42ae-91c3-64fe566c1b20\") " pod="openshift-console/console-f9d7485db-fkl9p" Nov 22 00:08:18 crc kubenswrapper[4928]: 
I1122 00:08:18.343936 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7rth\" (UniqueName: \"kubernetes.io/projected/ff645533-ce91-410f-9b30-134b6ab134f4-kube-api-access-t7rth\") pod \"machine-approver-56656f9798-fgph7\" (UID: \"ff645533-ce91-410f-9b30-134b6ab134f4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgph7"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.343953 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw8ph\" (UniqueName: \"kubernetes.io/projected/c10b9e4e-381b-425e-a1ce-3615a7580960-kube-api-access-kw8ph\") pod \"image-pruner-29396160-qh9g4\" (UID: \"c10b9e4e-381b-425e-a1ce-3615a7580960\") " pod="openshift-image-registry/image-pruner-29396160-qh9g4"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.343970 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d99453e-a6fd-4903-9889-0f2229a26fbf-config\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.343984 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7j5q\" (UniqueName: \"kubernetes.io/projected/a49e097b-7213-4398-800a-02a39ebc297e-kube-api-access-s7j5q\") pod \"downloads-7954f5f757-bttq9\" (UID: \"a49e097b-7213-4398-800a-02a39ebc297e\") " pod="openshift-console/downloads-7954f5f757-bttq9"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344001 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8fxm\" (UniqueName: \"kubernetes.io/projected/24972a91-2eb5-4dd5-9146-718e7fd61403-kube-api-access-l8fxm\") pod \"openshift-apiserver-operator-796bbdcf4f-g8hz8\" (UID: \"24972a91-2eb5-4dd5-9146-718e7fd61403\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g8hz8"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344021 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2mwr\" (UniqueName: \"kubernetes.io/projected/44a56f60-93be-4061-b8e9-36c22b3ef721-kube-api-access-n2mwr\") pod \"console-operator-58897d9998-8hklq\" (UID: \"44a56f60-93be-4061-b8e9-36c22b3ef721\") " pod="openshift-console-operator/console-operator-58897d9998-8hklq"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344037 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ef1ace38-76ed-4913-a9ac-8954a34820ae-audit-policies\") pod \"apiserver-7bbb656c7d-7x8dk\" (UID: \"ef1ace38-76ed-4913-a9ac-8954a34820ae\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344051 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a5945d14-c3fa-42ae-91c3-64fe566c1b20-console-config\") pod \"console-f9d7485db-fkl9p\" (UID: \"a5945d14-c3fa-42ae-91c3-64fe566c1b20\") " pod="openshift-console/console-f9d7485db-fkl9p"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344066 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ef1ace38-76ed-4913-a9ac-8954a34820ae-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7x8dk\" (UID: \"ef1ace38-76ed-4913-a9ac-8954a34820ae\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344082 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d29a2448-633a-472f-a072-849f5d90297e-etcd-ca\") pod \"etcd-operator-b45778765-zx7h8\" (UID: \"d29a2448-633a-472f-a072-849f5d90297e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zx7h8"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344098 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44a56f60-93be-4061-b8e9-36c22b3ef721-serving-cert\") pod \"console-operator-58897d9998-8hklq\" (UID: \"44a56f60-93be-4061-b8e9-36c22b3ef721\") " pod="openshift-console-operator/console-operator-58897d9998-8hklq"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344113 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njgph\" (UniqueName: \"kubernetes.io/projected/4cf5ba26-bf7c-436f-9b1c-1d1a7f6bad37-kube-api-access-njgph\") pod \"cluster-image-registry-operator-dc59b4c8b-5c8bn\" (UID: \"4cf5ba26-bf7c-436f-9b1c-1d1a7f6bad37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5c8bn"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344127 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344142 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0d99453e-a6fd-4903-9889-0f2229a26fbf-audit-dir\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344162 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff645533-ce91-410f-9b30-134b6ab134f4-config\") pod \"machine-approver-56656f9798-fgph7\" (UID: \"ff645533-ce91-410f-9b30-134b6ab134f4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgph7"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344178 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b36deea-9635-49f8-862a-b9067233fd24-serving-cert\") pod \"route-controller-manager-6576b87f9c-87wq5\" (UID: \"5b36deea-9635-49f8-862a-b9067233fd24\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-87wq5"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344192 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d29a2448-633a-472f-a072-849f5d90297e-serving-cert\") pod \"etcd-operator-b45778765-zx7h8\" (UID: \"d29a2448-633a-472f-a072-849f5d90297e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zx7h8"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344212 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2f0c943c-e71b-49f1-bce1-4d5bebec0bce-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-25w2t\" (UID: \"2f0c943c-e71b-49f1-bce1-4d5bebec0bce\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-25w2t"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344232 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ef1ace38-76ed-4913-a9ac-8954a34820ae-etcd-client\") pod \"apiserver-7bbb656c7d-7x8dk\" (UID: \"ef1ace38-76ed-4913-a9ac-8954a34820ae\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344246 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef1ace38-76ed-4913-a9ac-8954a34820ae-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7x8dk\" (UID: \"ef1ace38-76ed-4913-a9ac-8954a34820ae\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344263 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cf5ba26-bf7c-436f-9b1c-1d1a7f6bad37-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5c8bn\" (UID: \"4cf5ba26-bf7c-436f-9b1c-1d1a7f6bad37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5c8bn"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344287 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4cf5ba26-bf7c-436f-9b1c-1d1a7f6bad37-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5c8bn\" (UID: \"4cf5ba26-bf7c-436f-9b1c-1d1a7f6bad37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5c8bn"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344302 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4a07cf7e-773c-46ea-989f-ec01540e81ee-images\") pod \"machine-api-operator-5694c8668f-wqtln\" (UID: \"4a07cf7e-773c-46ea-989f-ec01540e81ee\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wqtln"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344320 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344340 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a5945d14-c3fa-42ae-91c3-64fe566c1b20-console-oauth-config\") pod \"console-f9d7485db-fkl9p\" (UID: \"a5945d14-c3fa-42ae-91c3-64fe566c1b20\") " pod="openshift-console/console-f9d7485db-fkl9p"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344370 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344386 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef1ace38-76ed-4913-a9ac-8954a34820ae-serving-cert\") pod \"apiserver-7bbb656c7d-7x8dk\" (UID: \"ef1ace38-76ed-4913-a9ac-8954a34820ae\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344404 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a5945d14-c3fa-42ae-91c3-64fe566c1b20-console-serving-cert\") pod \"console-f9d7485db-fkl9p\" (UID: \"a5945d14-c3fa-42ae-91c3-64fe566c1b20\") " pod="openshift-console/console-f9d7485db-fkl9p"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344427 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht2k4\" (UniqueName: \"kubernetes.io/projected/4a07cf7e-773c-46ea-989f-ec01540e81ee-kube-api-access-ht2k4\") pod \"machine-api-operator-5694c8668f-wqtln\" (UID: \"4a07cf7e-773c-46ea-989f-ec01540e81ee\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wqtln"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344466 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d99453e-a6fd-4903-9889-0f2229a26fbf-serving-cert\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344485 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ff645533-ce91-410f-9b30-134b6ab134f4-machine-approver-tls\") pod \"machine-approver-56656f9798-fgph7\" (UID: \"ff645533-ce91-410f-9b30-134b6ab134f4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgph7"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344501 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344518 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/42bd19a9-be1a-4c00-bf19-536e55a1ae11-audit-policies\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344534 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wdpp\" (UniqueName: \"kubernetes.io/projected/42bd19a9-be1a-4c00-bf19-536e55a1ae11-kube-api-access-7wdpp\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344555 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d99453e-a6fd-4903-9889-0f2229a26fbf-trusted-ca-bundle\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344575 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44a56f60-93be-4061-b8e9-36c22b3ef721-trusted-ca\") pod \"console-operator-58897d9998-8hklq\" (UID: \"44a56f60-93be-4061-b8e9-36c22b3ef721\") " pod="openshift-console-operator/console-operator-58897d9998-8hklq"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344590 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0d99453e-a6fd-4903-9889-0f2229a26fbf-node-pullsecrets\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344607 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f67b8\" (UniqueName: \"kubernetes.io/projected/5b36deea-9635-49f8-862a-b9067233fd24-kube-api-access-f67b8\") pod \"route-controller-manager-6576b87f9c-87wq5\" (UID: \"5b36deea-9635-49f8-862a-b9067233fd24\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-87wq5"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344623 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsqk9\" (UniqueName: \"kubernetes.io/projected/2f0c943c-e71b-49f1-bce1-4d5bebec0bce-kube-api-access-zsqk9\") pod \"multus-admission-controller-857f4d67dd-25w2t\" (UID: \"2f0c943c-e71b-49f1-bce1-4d5bebec0bce\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-25w2t"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344638 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5945d14-c3fa-42ae-91c3-64fe566c1b20-trusted-ca-bundle\") pod \"console-f9d7485db-fkl9p\" (UID: \"a5945d14-c3fa-42ae-91c3-64fe566c1b20\") " pod="openshift-console/console-f9d7485db-fkl9p"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344653 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d29a2448-633a-472f-a072-849f5d90297e-config\") pod \"etcd-operator-b45778765-zx7h8\" (UID: \"d29a2448-633a-472f-a072-849f5d90297e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zx7h8"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344667 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d29a2448-633a-472f-a072-849f5d90297e-etcd-client\") pod \"etcd-operator-b45778765-zx7h8\" (UID: \"d29a2448-633a-472f-a072-849f5d90297e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zx7h8"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344683 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cf5ba26-bf7c-436f-9b1c-1d1a7f6bad37-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5c8bn\" (UID: \"4cf5ba26-bf7c-436f-9b1c-1d1a7f6bad37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5c8bn"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344698 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344726 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzcz7\" (UniqueName: \"kubernetes.io/projected/ef1ace38-76ed-4913-a9ac-8954a34820ae-kube-api-access-wzcz7\") pod \"apiserver-7bbb656c7d-7x8dk\" (UID: \"ef1ace38-76ed-4913-a9ac-8954a34820ae\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344749 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0d99453e-a6fd-4903-9889-0f2229a26fbf-etcd-serving-ca\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344770 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0d99453e-a6fd-4903-9889-0f2229a26fbf-image-import-ca\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344825 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344843 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfblj\" (UniqueName: \"kubernetes.io/projected/0d99453e-a6fd-4903-9889-0f2229a26fbf-kube-api-access-qfblj\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344897 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c10b9e4e-381b-425e-a1ce-3615a7580960-serviceca\") pod \"image-pruner-29396160-qh9g4\" (UID: \"c10b9e4e-381b-425e-a1ce-3615a7580960\") " pod="openshift-image-registry/image-pruner-29396160-qh9g4"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344913 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344929 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24972a91-2eb5-4dd5-9146-718e7fd61403-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-g8hz8\" (UID: \"24972a91-2eb5-4dd5-9146-718e7fd61403\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g8hz8"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.344948 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0d99453e-a6fd-4903-9889-0f2229a26fbf-etcd-client\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.347279 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d99453e-a6fd-4903-9889-0f2229a26fbf-trusted-ca-bundle\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.347389 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0d99453e-a6fd-4903-9889-0f2229a26fbf-node-pullsecrets\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.347408 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d29a2448-633a-472f-a072-849f5d90297e-etcd-service-ca\") pod \"etcd-operator-b45778765-zx7h8\" (UID: \"d29a2448-633a-472f-a072-849f5d90297e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zx7h8"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.348344 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d29a2448-633a-472f-a072-849f5d90297e-config\") pod \"etcd-operator-b45778765-zx7h8\" (UID: \"d29a2448-633a-472f-a072-849f5d90297e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zx7h8"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.348486 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5945d14-c3fa-42ae-91c3-64fe566c1b20-trusted-ca-bundle\") pod \"console-f9d7485db-fkl9p\" (UID: \"a5945d14-c3fa-42ae-91c3-64fe566c1b20\") " pod="openshift-console/console-f9d7485db-fkl9p"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.348701 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44a56f60-93be-4061-b8e9-36c22b3ef721-trusted-ca\") pod \"console-operator-58897d9998-8hklq\" (UID: \"44a56f60-93be-4061-b8e9-36c22b3ef721\") " pod="openshift-console-operator/console-operator-58897d9998-8hklq"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.349540 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.350047 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef1ace38-76ed-4913-a9ac-8954a34820ae-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7x8dk\" (UID: \"ef1ace38-76ed-4913-a9ac-8954a34820ae\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.350572 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.350686 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4a07cf7e-773c-46ea-989f-ec01540e81ee-images\") pod \"machine-api-operator-5694c8668f-wqtln\" (UID: \"4a07cf7e-773c-46ea-989f-ec01540e81ee\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wqtln"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.352227 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ef1ace38-76ed-4913-a9ac-8954a34820ae-audit-dir\") pod \"apiserver-7bbb656c7d-7x8dk\" (UID: \"ef1ace38-76ed-4913-a9ac-8954a34820ae\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.352528 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42bd19a9-be1a-4c00-bf19-536e55a1ae11-audit-dir\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.353855 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.353897 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a5945d14-c3fa-42ae-91c3-64fe566c1b20-service-ca\") pod \"console-f9d7485db-fkl9p\" (UID: \"a5945d14-c3fa-42ae-91c3-64fe566c1b20\") " pod="openshift-console/console-f9d7485db-fkl9p"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.354225 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d29a2448-633a-472f-a072-849f5d90297e-serving-cert\") pod \"etcd-operator-b45778765-zx7h8\" (UID: \"d29a2448-633a-472f-a072-849f5d90297e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zx7h8"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.354281 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0d99453e-a6fd-4903-9889-0f2229a26fbf-audit-dir\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.354344 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b36deea-9635-49f8-862a-b9067233fd24-config\") pod \"route-controller-manager-6576b87f9c-87wq5\" (UID: \"5b36deea-9635-49f8-862a-b9067233fd24\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-87wq5"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.354480 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2f0c943c-e71b-49f1-bce1-4d5bebec0bce-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-25w2t\" (UID: \"2f0c943c-e71b-49f1-bce1-4d5bebec0bce\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-25w2t"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.354852 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-d7zcl"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.355329 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44a56f60-93be-4061-b8e9-36c22b3ef721-config\") pod \"console-operator-58897d9998-8hklq\" (UID: \"44a56f60-93be-4061-b8e9-36c22b3ef721\") " pod="openshift-console-operator/console-operator-58897d9998-8hklq"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.355977 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44a56f60-93be-4061-b8e9-36c22b3ef721-serving-cert\") pod \"console-operator-58897d9998-8hklq\" (UID: \"44a56f60-93be-4061-b8e9-36c22b3ef721\") " pod="openshift-console-operator/console-operator-58897d9998-8hklq"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.356416 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.358275 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ff645533-ce91-410f-9b30-134b6ab134f4-machine-approver-tls\") pod \"machine-approver-56656f9798-fgph7\" (UID: \"ff645533-ce91-410f-9b30-134b6ab134f4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgph7"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.358536 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d29a2448-633a-472f-a072-849f5d90297e-etcd-client\") pod \"etcd-operator-b45778765-zx7h8\" (UID: \"d29a2448-633a-472f-a072-849f5d90297e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zx7h8"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.359099 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.359451 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/42bd19a9-be1a-4c00-bf19-536e55a1ae11-audit-policies\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.359863 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.367151 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.369753 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cf5ba26-bf7c-436f-9b1c-1d1a7f6bad37-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5c8bn\" (UID: \"4cf5ba26-bf7c-436f-9b1c-1d1a7f6bad37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5c8bn"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.370019 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c10b9e4e-381b-425e-a1ce-3615a7580960-serviceca\") pod \"image-pruner-29396160-qh9g4\" (UID: \"c10b9e4e-381b-425e-a1ce-3615a7580960\") " pod="openshift-image-registry/image-pruner-29396160-qh9g4"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.370260 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0d99453e-a6fd-4903-9889-0f2229a26fbf-etcd-client\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.370985 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cf5ba26-bf7c-436f-9b1c-1d1a7f6bad37-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5c8bn\" (UID: \"4cf5ba26-bf7c-436f-9b1c-1d1a7f6bad37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5c8bn"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.371029 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b36deea-9635-49f8-862a-b9067233fd24-serving-cert\") pod \"route-controller-manager-6576b87f9c-87wq5\" (UID: \"5b36deea-9635-49f8-862a-b9067233fd24\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-87wq5"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.371117 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b36deea-9635-49f8-862a-b9067233fd24-client-ca\") pod \"route-controller-manager-6576b87f9c-87wq5\" (UID: \"5b36deea-9635-49f8-862a-b9067233fd24\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-87wq5"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.371917 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24972a91-2eb5-4dd5-9146-718e7fd61403-config\") pod \"openshift-apiserver-operator-796bbdcf4f-g8hz8\" (UID: \"24972a91-2eb5-4dd5-9146-718e7fd61403\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g8hz8"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.372346 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d99453e-a6fd-4903-9889-0f2229a26fbf-serving-cert\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.372464 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0d99453e-a6fd-4903-9889-0f2229a26fbf-audit\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.372955 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a07cf7e-773c-46ea-989f-ec01540e81ee-config\") pod \"machine-api-operator-5694c8668f-wqtln\" (UID: \"4a07cf7e-773c-46ea-989f-ec01540e81ee\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wqtln"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.373346 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ef1ace38-76ed-4913-a9ac-8954a34820ae-etcd-client\") pod \"apiserver-7bbb656c7d-7x8dk\" (UID: \"ef1ace38-76ed-4913-a9ac-8954a34820ae\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.373433 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a5945d14-c3fa-42ae-91c3-64fe566c1b20-console-serving-cert\") pod \"console-f9d7485db-fkl9p\" (UID: \"a5945d14-c3fa-42ae-91c3-64fe566c1b20\") " pod="openshift-console/console-f9d7485db-fkl9p"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.373584 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a5945d14-c3fa-42ae-91c3-64fe566c1b20-console-oauth-config\") pod \"console-f9d7485db-fkl9p\" (UID: \"a5945d14-c3fa-42ae-91c3-64fe566c1b20\") " pod="openshift-console/console-f9d7485db-fkl9p"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.373704 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0d99453e-a6fd-4903-9889-0f2229a26fbf-etcd-serving-ca\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.373770 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ef1ace38-76ed-4913-a9ac-8954a34820ae-audit-policies\") pod \"apiserver-7bbb656c7d-7x8dk\" (UID: \"ef1ace38-76ed-4913-a9ac-8954a34820ae\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.374000 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ef1ace38-76ed-4913-a9ac-8954a34820ae-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7x8dk\" (UID: \"ef1ace38-76ed-4913-a9ac-8954a34820ae\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.374238 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef1ace38-76ed-4913-a9ac-8954a34820ae-serving-cert\") pod \"apiserver-7bbb656c7d-7x8dk\" (UID: \"ef1ace38-76ed-4913-a9ac-8954a34820ae\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.374547 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d99453e-a6fd-4903-9889-0f2229a26fbf-config\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.374610 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0d99453e-a6fd-4903-9889-0f2229a26fbf-image-import-ca\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c"
Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.374967 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.375178 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.375593 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a5945d14-c3fa-42ae-91c3-64fe566c1b20-console-config\") pod \"console-f9d7485db-fkl9p\" (UID: \"a5945d14-c3fa-42ae-91c3-64fe566c1b20\") " pod="openshift-console/console-f9d7485db-fkl9p" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.375684 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d29a2448-633a-472f-a072-849f5d90297e-etcd-ca\") pod \"etcd-operator-b45778765-zx7h8\" (UID: \"d29a2448-633a-472f-a072-849f5d90297e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zx7h8" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.375732 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a5945d14-c3fa-42ae-91c3-64fe566c1b20-oauth-serving-cert\") pod \"console-f9d7485db-fkl9p\" (UID: \"a5945d14-c3fa-42ae-91c3-64fe566c1b20\") " pod="openshift-console/console-f9d7485db-fkl9p" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.380022 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff645533-ce91-410f-9b30-134b6ab134f4-config\") pod \"machine-approver-56656f9798-fgph7\" (UID: \"ff645533-ce91-410f-9b30-134b6ab134f4\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgph7" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.383553 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.383950 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.384091 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ff645533-ce91-410f-9b30-134b6ab134f4-auth-proxy-config\") pod \"machine-approver-56656f9798-fgph7\" (UID: \"ff645533-ce91-410f-9b30-134b6ab134f4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgph7" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.384372 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24972a91-2eb5-4dd5-9146-718e7fd61403-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-g8hz8\" (UID: \"24972a91-2eb5-4dd5-9146-718e7fd61403\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g8hz8" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.384529 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/ef1ace38-76ed-4913-a9ac-8954a34820ae-encryption-config\") pod \"apiserver-7bbb656c7d-7x8dk\" (UID: \"ef1ace38-76ed-4913-a9ac-8954a34820ae\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.385193 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4a07cf7e-773c-46ea-989f-ec01540e81ee-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wqtln\" (UID: \"4a07cf7e-773c-46ea-989f-ec01540e81ee\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wqtln" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.385813 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0d99453e-a6fd-4903-9889-0f2229a26fbf-encryption-config\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.386717 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.407471 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.426104 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.464662 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.469134 4928 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.487937 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.506084 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.526461 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.546248 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.566037 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-d7zcl"] Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.566858 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.586895 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.606988 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.628056 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.646767 4928 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.668036 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.686121 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.706101 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.725469 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.746578 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.766734 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.786629 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.791440 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-d7zcl" event={"ID":"f5e80a65-5318-45b5-a1c3-cd137679d471","Type":"ContainerStarted","Data":"64ba5043695b0dd49f3076f04143651ea0a103e327d2eb5fe7124893b5a8ec02"} Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.791494 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-d7zcl" 
event={"ID":"f5e80a65-5318-45b5-a1c3-cd137679d471","Type":"ContainerStarted","Data":"bd369f30864bb63b2b2333148a685b35f5bdd107a8d08377bc0370919365f9b4"} Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.807108 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.837021 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.846260 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.867354 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.886738 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.907146 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.926665 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.946293 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.967385 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 22 00:08:18 crc kubenswrapper[4928]: I1122 00:08:18.986625 4928 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.006845 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.026250 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.046656 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.066059 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.087492 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.106238 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.127466 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.147970 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.167087 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.187032 4928 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.204416 4928 request.go:700] Waited for 1.006271001s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcollect-profiles-dockercfg-kzf4t&limit=500&resourceVersion=0 Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.206953 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.226636 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.246914 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.267106 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.286529 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.306637 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.325491 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.347314 4928 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.366528 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.387181 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.405212 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.427334 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.446636 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.475579 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.485693 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.505992 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.526620 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.546576 4928 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.566157 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.586389 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.606265 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.641284 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.646604 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.666114 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.687126 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.705923 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.726587 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.746184 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.770465 4928 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.786259 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.806552 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.825365 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.845865 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.865402 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.885582 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.906224 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.925661 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.948133 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.965993 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 22 00:08:19 crc kubenswrapper[4928]: I1122 00:08:19.985424 4928 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.006407 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.026268 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.045999 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.066031 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.086050 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.106717 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.126538 4928 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.163856 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f67b8\" (UniqueName: \"kubernetes.io/projected/5b36deea-9635-49f8-862a-b9067233fd24-kube-api-access-f67b8\") pod \"route-controller-manager-6576b87f9c-87wq5\" (UID: \"5b36deea-9635-49f8-862a-b9067233fd24\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-87wq5" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.184463 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zsqk9\" (UniqueName: \"kubernetes.io/projected/2f0c943c-e71b-49f1-bce1-4d5bebec0bce-kube-api-access-zsqk9\") pod \"multus-admission-controller-857f4d67dd-25w2t\" (UID: \"2f0c943c-e71b-49f1-bce1-4d5bebec0bce\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-25w2t" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.204407 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4cf5ba26-bf7c-436f-9b1c-1d1a7f6bad37-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5c8bn\" (UID: \"4cf5ba26-bf7c-436f-9b1c-1d1a7f6bad37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5c8bn" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.204424 4928 request.go:700] Waited for 1.853789343s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/machine-api-operator/token Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.224120 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht2k4\" (UniqueName: \"kubernetes.io/projected/4a07cf7e-773c-46ea-989f-ec01540e81ee-kube-api-access-ht2k4\") pod \"machine-api-operator-5694c8668f-wqtln\" (UID: \"4a07cf7e-773c-46ea-989f-ec01540e81ee\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wqtln" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.241216 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-87wq5" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.268600 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzcz7\" (UniqueName: \"kubernetes.io/projected/ef1ace38-76ed-4913-a9ac-8954a34820ae-kube-api-access-wzcz7\") pod \"apiserver-7bbb656c7d-7x8dk\" (UID: \"ef1ace38-76ed-4913-a9ac-8954a34820ae\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.280419 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/294bb8b1-6c6b-45a6-aa39-33e662ff511d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m9ggs\" (UID: \"294bb8b1-6c6b-45a6-aa39-33e662ff511d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m9ggs" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.280476 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.280504 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9b218ffc-ca02-4e5b-bc3b-40509f3ee9e3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rgn8z\" (UID: \"9b218ffc-ca02-4e5b-bc3b-40509f3ee9e3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rgn8z" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.280527 4928 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/294bb8b1-6c6b-45a6-aa39-33e662ff511d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m9ggs\" (UID: \"294bb8b1-6c6b-45a6-aa39-33e662ff511d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m9ggs" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.280608 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/670d4817-7813-4326-9eb3-d3319ecc7ae2-bound-sa-token\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.280631 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/670d4817-7813-4326-9eb3-d3319ecc7ae2-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.280828 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48e4652a-8c3c-4128-b0f9-8a5ebd25396e-serving-cert\") pod \"openshift-config-operator-7777fb866f-n62d6\" (UID: \"48e4652a-8c3c-4128-b0f9-8a5ebd25396e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n62d6" Nov 22 00:08:20 crc kubenswrapper[4928]: E1122 00:08:20.280983 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-22 00:08:20.780965683 +0000 UTC m=+136.068996884 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.281012 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/670d4817-7813-4326-9eb3-d3319ecc7ae2-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.281054 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dd9d\" (UniqueName: \"kubernetes.io/projected/48e4652a-8c3c-4128-b0f9-8a5ebd25396e-kube-api-access-5dd9d\") pod \"openshift-config-operator-7777fb866f-n62d6\" (UID: \"48e4652a-8c3c-4128-b0f9-8a5ebd25396e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n62d6" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.281077 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zppzf\" (UniqueName: \"kubernetes.io/projected/9b218ffc-ca02-4e5b-bc3b-40509f3ee9e3-kube-api-access-zppzf\") pod \"cluster-samples-operator-665b6dd947-rgn8z\" (UID: \"9b218ffc-ca02-4e5b-bc3b-40509f3ee9e3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rgn8z" Nov 22 00:08:20 crc kubenswrapper[4928]: 
I1122 00:08:20.281133 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/670d4817-7813-4326-9eb3-d3319ecc7ae2-trusted-ca\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.281324 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/48e4652a-8c3c-4128-b0f9-8a5ebd25396e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-n62d6\" (UID: \"48e4652a-8c3c-4128-b0f9-8a5ebd25396e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n62d6" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.281428 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/670d4817-7813-4326-9eb3-d3319ecc7ae2-registry-tls\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.281495 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/670d4817-7813-4326-9eb3-d3319ecc7ae2-registry-certificates\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.281560 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p266w\" (UniqueName: 
\"kubernetes.io/projected/670d4817-7813-4326-9eb3-d3319ecc7ae2-kube-api-access-p266w\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.281649 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7m8h\" (UniqueName: \"kubernetes.io/projected/294bb8b1-6c6b-45a6-aa39-33e662ff511d-kube-api-access-v7m8h\") pod \"openshift-controller-manager-operator-756b6f6bc6-m9ggs\" (UID: \"294bb8b1-6c6b-45a6-aa39-33e662ff511d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m9ggs" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.298166 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wdpp\" (UniqueName: \"kubernetes.io/projected/42bd19a9-be1a-4c00-bf19-536e55a1ae11-kube-api-access-7wdpp\") pod \"oauth-openshift-558db77b4-tmdvd\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.311025 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7rth\" (UniqueName: \"kubernetes.io/projected/ff645533-ce91-410f-9b30-134b6ab134f4-kube-api-access-t7rth\") pod \"machine-approver-56656f9798-fgph7\" (UID: \"ff645533-ce91-410f-9b30-134b6ab134f4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgph7" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.327869 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw8ph\" (UniqueName: \"kubernetes.io/projected/c10b9e4e-381b-425e-a1ce-3615a7580960-kube-api-access-kw8ph\") pod \"image-pruner-29396160-qh9g4\" (UID: \"c10b9e4e-381b-425e-a1ce-3615a7580960\") " pod="openshift-image-registry/image-pruner-29396160-qh9g4" 
Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.345907 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5x4x\" (UniqueName: \"kubernetes.io/projected/d29a2448-633a-472f-a072-849f5d90297e-kube-api-access-l5x4x\") pod \"etcd-operator-b45778765-zx7h8\" (UID: \"d29a2448-633a-472f-a072-849f5d90297e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zx7h8" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.356129 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.362625 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zx7h8" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.376920 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7j5q\" (UniqueName: \"kubernetes.io/projected/a49e097b-7213-4398-800a-02a39ebc297e-kube-api-access-s7j5q\") pod \"downloads-7954f5f757-bttq9\" (UID: \"a49e097b-7213-4398-800a-02a39ebc297e\") " pod="openshift-console/downloads-7954f5f757-bttq9" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.382366 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:20 crc kubenswrapper[4928]: E1122 00:08:20.382591 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-22 00:08:20.882556233 +0000 UTC m=+136.170587434 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.382661 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f4d80d6f-91bd-4ea6-ac44-993a78316ba2-socket-dir\") pod \"csi-hostpathplugin-6fxj5\" (UID: \"f4d80d6f-91bd-4ea6-ac44-993a78316ba2\") " pod="hostpath-provisioner/csi-hostpathplugin-6fxj5" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.382700 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/670d4817-7813-4326-9eb3-d3319ecc7ae2-bound-sa-token\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.382722 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/72b08661-098c-44a0-82f1-7b1277bc65ba-cert\") pod \"ingress-canary-87wpn\" (UID: \"72b08661-098c-44a0-82f1-7b1277bc65ba\") " pod="openshift-ingress-canary/ingress-canary-87wpn" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.382747 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/8235f10d-349e-42ab-93e3-ed056e36bca4-tmpfs\") pod \"packageserver-d55dfcdfc-lpmm4\" (UID: \"8235f10d-349e-42ab-93e3-ed056e36bca4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lpmm4" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.382769 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8235f10d-349e-42ab-93e3-ed056e36bca4-apiservice-cert\") pod \"packageserver-d55dfcdfc-lpmm4\" (UID: \"8235f10d-349e-42ab-93e3-ed056e36bca4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lpmm4" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.382851 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7707cfc4-56b4-4b94-8383-954071ae4bd4-config\") pod \"service-ca-operator-777779d784-mp624\" (UID: \"7707cfc4-56b4-4b94-8383-954071ae4bd4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mp624" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.383033 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6416d870-85ef-468a-a9a2-b2429b06a6d3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-c4rrg\" (UID: \"6416d870-85ef-468a-a9a2-b2429b06a6d3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c4rrg" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.383066 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f50294b-9edd-469f-8873-aeb445d7653b-serving-cert\") pod \"controller-manager-879f6c89f-hnm59\" (UID: \"4f50294b-9edd-469f-8873-aeb445d7653b\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-hnm59" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.383583 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/670d4817-7813-4326-9eb3-d3319ecc7ae2-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.383914 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/47c44a11-8d04-4a95-9ce0-6436dbf6a035-profile-collector-cert\") pod \"catalog-operator-68c6474976-nzv4d\" (UID: \"47c44a11-8d04-4a95-9ce0-6436dbf6a035\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzv4d" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.384129 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/670d4817-7813-4326-9eb3-d3319ecc7ae2-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.384406 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62ac57bc-5d54-4f2c-83bc-8068eead3c89-config\") pod \"kube-controller-manager-operator-78b949d7b-gqrxk\" (UID: \"62ac57bc-5d54-4f2c-83bc-8068eead3c89\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqrxk" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.385477 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8cbe72dc-6cc5-48bb-921d-17466274c88a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-7wv9p\" (UID: \"8cbe72dc-6cc5-48bb-921d-17466274c88a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wv9p" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.385514 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdrth\" (UniqueName: \"kubernetes.io/projected/db624f52-ab96-42e2-a54c-62c2abc36274-kube-api-access-sdrth\") pod \"marketplace-operator-79b997595-54l8n\" (UID: \"db624f52-ab96-42e2-a54c-62c2abc36274\") " pod="openshift-marketplace/marketplace-operator-79b997595-54l8n" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.385537 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg8fg\" (UniqueName: \"kubernetes.io/projected/2bf436d0-26db-4a3b-b9c2-8ca66b3a37a7-kube-api-access-gg8fg\") pod \"machine-config-operator-74547568cd-ftrnl\" (UID: \"2bf436d0-26db-4a3b-b9c2-8ca66b3a37a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ftrnl" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.385571 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ec9c6599-a073-4b08-9e39-20250709a458-metrics-tls\") pod \"dns-default-dqtnp\" (UID: \"ec9c6599-a073-4b08-9e39-20250709a458\") " pod="openshift-dns/dns-default-dqtnp" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.385643 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f6c2195-9090-43be-bc5a-802f7e7256d8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-chsjj\" (UID: 
\"7f6c2195-9090-43be-bc5a-802f7e7256d8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chsjj" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.385668 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2bf436d0-26db-4a3b-b9c2-8ca66b3a37a7-images\") pod \"machine-config-operator-74547568cd-ftrnl\" (UID: \"2bf436d0-26db-4a3b-b9c2-8ca66b3a37a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ftrnl" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.385712 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4765c779-803b-42f9-9598-c91eec1b5e9e-default-certificate\") pod \"router-default-5444994796-rz49f\" (UID: \"4765c779-803b-42f9-9598-c91eec1b5e9e\") " pod="openshift-ingress/router-default-5444994796-rz49f" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.386360 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2bf436d0-26db-4a3b-b9c2-8ca66b3a37a7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ftrnl\" (UID: \"2bf436d0-26db-4a3b-b9c2-8ca66b3a37a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ftrnl" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.386585 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/670d4817-7813-4326-9eb3-d3319ecc7ae2-trusted-ca\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.386615 4928 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15e32f8b-4a69-4ca9-9cf1-965c9b734ed7-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xb2wb\" (UID: \"15e32f8b-4a69-4ca9-9cf1-965c9b734ed7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xb2wb" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.386656 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8h6l\" (UniqueName: \"kubernetes.io/projected/35bda3e7-0e70-4bfc-bd43-29fef60761d2-kube-api-access-z8h6l\") pod \"ingress-operator-5b745b69d9-c2zkd\" (UID: \"35bda3e7-0e70-4bfc-bd43-29fef60761d2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2zkd" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.386678 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6xb2\" (UniqueName: \"kubernetes.io/projected/47c44a11-8d04-4a95-9ce0-6436dbf6a035-kube-api-access-z6xb2\") pod \"catalog-operator-68c6474976-nzv4d\" (UID: \"47c44a11-8d04-4a95-9ce0-6436dbf6a035\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzv4d" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.387396 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4765c779-803b-42f9-9598-c91eec1b5e9e-metrics-certs\") pod \"router-default-5444994796-rz49f\" (UID: \"4765c779-803b-42f9-9598-c91eec1b5e9e\") " pod="openshift-ingress/router-default-5444994796-rz49f" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.387476 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35bda3e7-0e70-4bfc-bd43-29fef60761d2-trusted-ca\") pod 
\"ingress-operator-5b745b69d9-c2zkd\" (UID: \"35bda3e7-0e70-4bfc-bd43-29fef60761d2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2zkd" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.387773 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/48e4652a-8c3c-4128-b0f9-8a5ebd25396e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-n62d6\" (UID: \"48e4652a-8c3c-4128-b0f9-8a5ebd25396e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n62d6" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.387837 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc2f5\" (UniqueName: \"kubernetes.io/projected/2a03505d-6ba7-485d-9f59-6bf35b5c33d0-kube-api-access-fc2f5\") pod \"machine-config-server-vq7f7\" (UID: \"2a03505d-6ba7-485d-9f59-6bf35b5c33d0\") " pod="openshift-machine-config-operator/machine-config-server-vq7f7" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.388177 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/48e4652a-8c3c-4128-b0f9-8a5ebd25396e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-n62d6\" (UID: \"48e4652a-8c3c-4128-b0f9-8a5ebd25396e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n62d6" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.388222 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/670d4817-7813-4326-9eb3-d3319ecc7ae2-trusted-ca\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.388228 4928 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8480834f-a7e6-4459-a2d2-73b968249ce7-secret-volume\") pod \"collect-profiles-29396160-c8ch5\" (UID: \"8480834f-a7e6-4459-a2d2-73b968249ce7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396160-c8ch5" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.388266 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nvm6\" (UniqueName: \"kubernetes.io/projected/6416d870-85ef-468a-a9a2-b2429b06a6d3-kube-api-access-5nvm6\") pod \"machine-config-controller-84d6567774-c4rrg\" (UID: \"6416d870-85ef-468a-a9a2-b2429b06a6d3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c4rrg" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.388293 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/670d4817-7813-4326-9eb3-d3319ecc7ae2-registry-tls\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.388313 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p266w\" (UniqueName: \"kubernetes.io/projected/670d4817-7813-4326-9eb3-d3319ecc7ae2-kube-api-access-p266w\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.388348 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cef7a6ad-8a53-415c-b18e-098c96f5c630-control-plane-machine-set-operator-tls\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-7xwgw\" (UID: \"cef7a6ad-8a53-415c-b18e-098c96f5c630\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7xwgw" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.388372 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d46a6b01-c494-4912-8b32-a496ef03461a-signing-key\") pod \"service-ca-9c57cc56f-lhcmt\" (UID: \"d46a6b01-c494-4912-8b32-a496ef03461a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lhcmt" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.388430 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cbe72dc-6cc5-48bb-921d-17466274c88a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-7wv9p\" (UID: \"8cbe72dc-6cc5-48bb-921d-17466274c88a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wv9p" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.388450 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4765c779-803b-42f9-9598-c91eec1b5e9e-stats-auth\") pod \"router-default-5444994796-rz49f\" (UID: \"4765c779-803b-42f9-9598-c91eec1b5e9e\") " pod="openshift-ingress/router-default-5444994796-rz49f" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.388475 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2a03505d-6ba7-485d-9f59-6bf35b5c33d0-node-bootstrap-token\") pod \"machine-config-server-vq7f7\" (UID: \"2a03505d-6ba7-485d-9f59-6bf35b5c33d0\") " pod="openshift-machine-config-operator/machine-config-server-vq7f7" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.388494 4928 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/62ac57bc-5d54-4f2c-83bc-8068eead3c89-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gqrxk\" (UID: \"62ac57bc-5d54-4f2c-83bc-8068eead3c89\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqrxk" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.388513 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7707cfc4-56b4-4b94-8383-954071ae4bd4-serving-cert\") pod \"service-ca-operator-777779d784-mp624\" (UID: \"7707cfc4-56b4-4b94-8383-954071ae4bd4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mp624" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.388572 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9b218ffc-ca02-4e5b-bc3b-40509f3ee9e3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rgn8z\" (UID: \"9b218ffc-ca02-4e5b-bc3b-40509f3ee9e3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rgn8z" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.388594 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/db624f52-ab96-42e2-a54c-62c2abc36274-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-54l8n\" (UID: \"db624f52-ab96-42e2-a54c-62c2abc36274\") " pod="openshift-marketplace/marketplace-operator-79b997595-54l8n" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.388627 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/294bb8b1-6c6b-45a6-aa39-33e662ff511d-serving-cert\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-m9ggs\" (UID: \"294bb8b1-6c6b-45a6-aa39-33e662ff511d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m9ggs" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.388652 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmjqs\" (UniqueName: \"kubernetes.io/projected/72b08661-098c-44a0-82f1-7b1277bc65ba-kube-api-access-nmjqs\") pod \"ingress-canary-87wpn\" (UID: \"72b08661-098c-44a0-82f1-7b1277bc65ba\") " pod="openshift-ingress-canary/ingress-canary-87wpn" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.388674 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f49e92c-7ef7-4cd8-9dda-fbc8e8694b79-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rjn2t\" (UID: \"6f49e92c-7ef7-4cd8-9dda-fbc8e8694b79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjn2t" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.388697 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86tf8\" (UniqueName: \"kubernetes.io/projected/8235f10d-349e-42ab-93e3-ed056e36bca4-kube-api-access-86tf8\") pod \"packageserver-d55dfcdfc-lpmm4\" (UID: \"8235f10d-349e-42ab-93e3-ed056e36bca4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lpmm4" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.388717 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f50294b-9edd-469f-8873-aeb445d7653b-config\") pod \"controller-manager-879f6c89f-hnm59\" (UID: \"4f50294b-9edd-469f-8873-aeb445d7653b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnm59" Nov 22 00:08:20 crc 
kubenswrapper[4928]: I1122 00:08:20.388736 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d46a6b01-c494-4912-8b32-a496ef03461a-signing-cabundle\") pod \"service-ca-9c57cc56f-lhcmt\" (UID: \"d46a6b01-c494-4912-8b32-a496ef03461a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lhcmt" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.388766 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq48r\" (UniqueName: \"kubernetes.io/projected/05f6d2a7-33e5-42fe-8d92-bdb258597a56-kube-api-access-fq48r\") pod \"migrator-59844c95c7-j72fl\" (UID: \"05f6d2a7-33e5-42fe-8d92-bdb258597a56\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j72fl" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.388826 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6416d870-85ef-468a-a9a2-b2429b06a6d3-proxy-tls\") pod \"machine-config-controller-84d6567774-c4rrg\" (UID: \"6416d870-85ef-468a-a9a2-b2429b06a6d3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c4rrg" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.388856 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg42k\" (UniqueName: \"kubernetes.io/projected/cef7a6ad-8a53-415c-b18e-098c96f5c630-kube-api-access-tg42k\") pod \"control-plane-machine-set-operator-78cbb6b69f-7xwgw\" (UID: \"cef7a6ad-8a53-415c-b18e-098c96f5c630\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7xwgw" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.388880 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/8235f10d-349e-42ab-93e3-ed056e36bca4-webhook-cert\") pod \"packageserver-d55dfcdfc-lpmm4\" (UID: \"8235f10d-349e-42ab-93e3-ed056e36bca4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lpmm4" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.388948 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f49e92c-7ef7-4cd8-9dda-fbc8e8694b79-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rjn2t\" (UID: \"6f49e92c-7ef7-4cd8-9dda-fbc8e8694b79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjn2t" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.388972 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47c44a11-8d04-4a95-9ce0-6436dbf6a035-srv-cert\") pod \"catalog-operator-68c6474976-nzv4d\" (UID: \"47c44a11-8d04-4a95-9ce0-6436dbf6a035\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzv4d" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.388996 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48e4652a-8c3c-4128-b0f9-8a5ebd25396e-serving-cert\") pod \"openshift-config-operator-7777fb866f-n62d6\" (UID: \"48e4652a-8c3c-4128-b0f9-8a5ebd25396e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n62d6" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.389048 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/35bda3e7-0e70-4bfc-bd43-29fef60761d2-metrics-tls\") pod \"ingress-operator-5b745b69d9-c2zkd\" (UID: \"35bda3e7-0e70-4bfc-bd43-29fef60761d2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2zkd" 
Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.389076 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2qxn\" (UniqueName: \"kubernetes.io/projected/7707cfc4-56b4-4b94-8383-954071ae4bd4-kube-api-access-t2qxn\") pod \"service-ca-operator-777779d784-mp624\" (UID: \"7707cfc4-56b4-4b94-8383-954071ae4bd4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mp624" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.389104 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csjcx\" (UniqueName: \"kubernetes.io/projected/13f49211-e6af-49ed-823f-d1b54d73ac0b-kube-api-access-csjcx\") pod \"dns-operator-744455d44c-c64mx\" (UID: \"13f49211-e6af-49ed-823f-d1b54d73ac0b\") " pod="openshift-dns-operator/dns-operator-744455d44c-c64mx" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.389132 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/670d4817-7813-4326-9eb3-d3319ecc7ae2-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.389158 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z58g2\" (UniqueName: \"kubernetes.io/projected/f4d80d6f-91bd-4ea6-ac44-993a78316ba2-kube-api-access-z58g2\") pod \"csi-hostpathplugin-6fxj5\" (UID: \"f4d80d6f-91bd-4ea6-ac44-993a78316ba2\") " pod="hostpath-provisioner/csi-hostpathplugin-6fxj5" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.389183 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/13f49211-e6af-49ed-823f-d1b54d73ac0b-metrics-tls\") pod \"dns-operator-744455d44c-c64mx\" (UID: \"13f49211-e6af-49ed-823f-d1b54d73ac0b\") " pod="openshift-dns-operator/dns-operator-744455d44c-c64mx" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.389207 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2mxv\" (UniqueName: \"kubernetes.io/projected/ec9c6599-a073-4b08-9e39-20250709a458-kube-api-access-n2mxv\") pod \"dns-default-dqtnp\" (UID: \"ec9c6599-a073-4b08-9e39-20250709a458\") " pod="openshift-dns/dns-default-dqtnp" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.389246 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f4d80d6f-91bd-4ea6-ac44-993a78316ba2-registration-dir\") pod \"csi-hostpathplugin-6fxj5\" (UID: \"f4d80d6f-91bd-4ea6-ac44-993a78316ba2\") " pod="hostpath-provisioner/csi-hostpathplugin-6fxj5" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.389269 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f49e92c-7ef7-4cd8-9dda-fbc8e8694b79-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rjn2t\" (UID: \"6f49e92c-7ef7-4cd8-9dda-fbc8e8694b79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjn2t" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.389289 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f50294b-9edd-469f-8873-aeb445d7653b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hnm59\" (UID: \"4f50294b-9edd-469f-8873-aeb445d7653b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnm59" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 
00:08:20.389307 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f4d80d6f-91bd-4ea6-ac44-993a78316ba2-plugins-dir\") pod \"csi-hostpathplugin-6fxj5\" (UID: \"f4d80d6f-91bd-4ea6-ac44-993a78316ba2\") " pod="hostpath-provisioner/csi-hostpathplugin-6fxj5" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.389340 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dd9d\" (UniqueName: \"kubernetes.io/projected/48e4652a-8c3c-4128-b0f9-8a5ebd25396e-kube-api-access-5dd9d\") pod \"openshift-config-operator-7777fb866f-n62d6\" (UID: \"48e4652a-8c3c-4128-b0f9-8a5ebd25396e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n62d6" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.389360 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zppzf\" (UniqueName: \"kubernetes.io/projected/9b218ffc-ca02-4e5b-bc3b-40509f3ee9e3-kube-api-access-zppzf\") pod \"cluster-samples-operator-665b6dd947-rgn8z\" (UID: \"9b218ffc-ca02-4e5b-bc3b-40509f3ee9e3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rgn8z" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.389406 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp2wf\" (UniqueName: \"kubernetes.io/projected/4f50294b-9edd-469f-8873-aeb445d7653b-kube-api-access-hp2wf\") pod \"controller-manager-879f6c89f-hnm59\" (UID: \"4f50294b-9edd-469f-8873-aeb445d7653b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnm59" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.389436 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/db624f52-ab96-42e2-a54c-62c2abc36274-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-54l8n\" (UID: \"db624f52-ab96-42e2-a54c-62c2abc36274\") " pod="openshift-marketplace/marketplace-operator-79b997595-54l8n" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.389459 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2bf436d0-26db-4a3b-b9c2-8ca66b3a37a7-proxy-tls\") pod \"machine-config-operator-74547568cd-ftrnl\" (UID: \"2bf436d0-26db-4a3b-b9c2-8ca66b3a37a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ftrnl" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.389494 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwkbn\" (UniqueName: \"kubernetes.io/projected/d46a6b01-c494-4912-8b32-a496ef03461a-kube-api-access-qwkbn\") pod \"service-ca-9c57cc56f-lhcmt\" (UID: \"d46a6b01-c494-4912-8b32-a496ef03461a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lhcmt" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.389515 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f50294b-9edd-469f-8873-aeb445d7653b-client-ca\") pod \"controller-manager-879f6c89f-hnm59\" (UID: \"4f50294b-9edd-469f-8873-aeb445d7653b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnm59" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.389554 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcthd\" (UniqueName: \"kubernetes.io/projected/4765c779-803b-42f9-9598-c91eec1b5e9e-kube-api-access-rcthd\") pod \"router-default-5444994796-rz49f\" (UID: \"4765c779-803b-42f9-9598-c91eec1b5e9e\") " pod="openshift-ingress/router-default-5444994796-rz49f" Nov 22 
00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.389611 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62ac57bc-5d54-4f2c-83bc-8068eead3c89-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gqrxk\" (UID: \"62ac57bc-5d54-4f2c-83bc-8068eead3c89\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqrxk" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.389633 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0a5b438c-2c33-49fa-a7c9-98aef8f764f3-srv-cert\") pod \"olm-operator-6b444d44fb-qmvbl\" (UID: \"0a5b438c-2c33-49fa-a7c9-98aef8f764f3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmvbl" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.389652 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4765c779-803b-42f9-9598-c91eec1b5e9e-service-ca-bundle\") pod \"router-default-5444994796-rz49f\" (UID: \"4765c779-803b-42f9-9598-c91eec1b5e9e\") " pod="openshift-ingress/router-default-5444994796-rz49f" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.389696 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7m8h\" (UniqueName: \"kubernetes.io/projected/294bb8b1-6c6b-45a6-aa39-33e662ff511d-kube-api-access-v7m8h\") pod \"openshift-controller-manager-operator-756b6f6bc6-m9ggs\" (UID: \"294bb8b1-6c6b-45a6-aa39-33e662ff511d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m9ggs" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.389750 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4d80d6f-91bd-4ea6-ac44-993a78316ba2-csi-data-dir\") pod \"csi-hostpathplugin-6fxj5\" (UID: \"f4d80d6f-91bd-4ea6-ac44-993a78316ba2\") " pod="hostpath-provisioner/csi-hostpathplugin-6fxj5" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.389882 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/670d4817-7813-4326-9eb3-d3319ecc7ae2-registry-certificates\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.389958 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0a5b438c-2c33-49fa-a7c9-98aef8f764f3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qmvbl\" (UID: \"0a5b438c-2c33-49fa-a7c9-98aef8f764f3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmvbl" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.389996 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f4d80d6f-91bd-4ea6-ac44-993a78316ba2-mountpoint-dir\") pod \"csi-hostpathplugin-6fxj5\" (UID: \"f4d80d6f-91bd-4ea6-ac44-993a78316ba2\") " pod="hostpath-provisioner/csi-hostpathplugin-6fxj5" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.390098 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15e32f8b-4a69-4ca9-9cf1-965c9b734ed7-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xb2wb\" (UID: \"15e32f8b-4a69-4ca9-9cf1-965c9b734ed7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xb2wb" Nov 22 
00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.390138 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prk9b\" (UniqueName: \"kubernetes.io/projected/7f6c2195-9090-43be-bc5a-802f7e7256d8-kube-api-access-prk9b\") pod \"package-server-manager-789f6589d5-chsjj\" (UID: \"7f6c2195-9090-43be-bc5a-802f7e7256d8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chsjj" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.390175 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec9c6599-a073-4b08-9e39-20250709a458-config-volume\") pod \"dns-default-dqtnp\" (UID: \"ec9c6599-a073-4b08-9e39-20250709a458\") " pod="openshift-dns/dns-default-dqtnp" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.390208 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmftb\" (UniqueName: \"kubernetes.io/projected/0a5b438c-2c33-49fa-a7c9-98aef8f764f3-kube-api-access-qmftb\") pod \"olm-operator-6b444d44fb-qmvbl\" (UID: \"0a5b438c-2c33-49fa-a7c9-98aef8f764f3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmvbl" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.390278 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/294bb8b1-6c6b-45a6-aa39-33e662ff511d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m9ggs\" (UID: \"294bb8b1-6c6b-45a6-aa39-33e662ff511d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m9ggs" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.390311 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/35bda3e7-0e70-4bfc-bd43-29fef60761d2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-c2zkd\" (UID: \"35bda3e7-0e70-4bfc-bd43-29fef60761d2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2zkd" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.390392 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.390429 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cbe72dc-6cc5-48bb-921d-17466274c88a-config\") pod \"kube-apiserver-operator-766d6c64bb-7wv9p\" (UID: \"8cbe72dc-6cc5-48bb-921d-17466274c88a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wv9p" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.390554 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8480834f-a7e6-4459-a2d2-73b968249ce7-config-volume\") pod \"collect-profiles-29396160-c8ch5\" (UID: \"8480834f-a7e6-4459-a2d2-73b968249ce7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396160-c8ch5" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.390589 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzf4w\" (UniqueName: \"kubernetes.io/projected/8480834f-a7e6-4459-a2d2-73b968249ce7-kube-api-access-kzf4w\") pod \"collect-profiles-29396160-c8ch5\" (UID: \"8480834f-a7e6-4459-a2d2-73b968249ce7\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29396160-c8ch5" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.390722 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2a03505d-6ba7-485d-9f59-6bf35b5c33d0-certs\") pod \"machine-config-server-vq7f7\" (UID: \"2a03505d-6ba7-485d-9f59-6bf35b5c33d0\") " pod="openshift-machine-config-operator/machine-config-server-vq7f7" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.390782 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppd4m\" (UniqueName: \"kubernetes.io/projected/15e32f8b-4a69-4ca9-9cf1-965c9b734ed7-kube-api-access-ppd4m\") pod \"kube-storage-version-migrator-operator-b67b599dd-xb2wb\" (UID: \"15e32f8b-4a69-4ca9-9cf1-965c9b734ed7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xb2wb" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.392361 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/670d4817-7813-4326-9eb3-d3319ecc7ae2-registry-tls\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.394672 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/670d4817-7813-4326-9eb3-d3319ecc7ae2-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.397520 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/294bb8b1-6c6b-45a6-aa39-33e662ff511d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m9ggs\" (UID: \"294bb8b1-6c6b-45a6-aa39-33e662ff511d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m9ggs" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.398073 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9b218ffc-ca02-4e5b-bc3b-40509f3ee9e3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rgn8z\" (UID: \"9b218ffc-ca02-4e5b-bc3b-40509f3ee9e3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rgn8z" Nov 22 00:08:20 crc kubenswrapper[4928]: E1122 00:08:20.399419 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:20.899395916 +0000 UTC m=+136.187427147 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.399493 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/670d4817-7813-4326-9eb3-d3319ecc7ae2-registry-certificates\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.400083 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48e4652a-8c3c-4128-b0f9-8a5ebd25396e-serving-cert\") pod \"openshift-config-operator-7777fb866f-n62d6\" (UID: \"48e4652a-8c3c-4128-b0f9-8a5ebd25396e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n62d6" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.400513 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/294bb8b1-6c6b-45a6-aa39-33e662ff511d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m9ggs\" (UID: \"294bb8b1-6c6b-45a6-aa39-33e662ff511d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m9ggs" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.400688 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-25w2t" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.434610 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8fxm\" (UniqueName: \"kubernetes.io/projected/24972a91-2eb5-4dd5-9146-718e7fd61403-kube-api-access-l8fxm\") pod \"openshift-apiserver-operator-796bbdcf4f-g8hz8\" (UID: \"24972a91-2eb5-4dd5-9146-718e7fd61403\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g8hz8" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.435715 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2mwr\" (UniqueName: \"kubernetes.io/projected/44a56f60-93be-4061-b8e9-36c22b3ef721-kube-api-access-n2mwr\") pod \"console-operator-58897d9998-8hklq\" (UID: \"44a56f60-93be-4061-b8e9-36c22b3ef721\") " pod="openshift-console-operator/console-operator-58897d9998-8hklq" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.436071 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfblj\" (UniqueName: \"kubernetes.io/projected/0d99453e-a6fd-4903-9889-0f2229a26fbf-kube-api-access-qfblj\") pod \"apiserver-76f77b778f-kxr5c\" (UID: \"0d99453e-a6fd-4903-9889-0f2229a26fbf\") " pod="openshift-apiserver/apiserver-76f77b778f-kxr5c" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.452761 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxqql\" (UniqueName: \"kubernetes.io/projected/a5945d14-c3fa-42ae-91c3-64fe566c1b20-kube-api-access-hxqql\") pod \"console-f9d7485db-fkl9p\" (UID: \"a5945d14-c3fa-42ae-91c3-64fe566c1b20\") " pod="openshift-console/console-f9d7485db-fkl9p" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.467294 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-wqtln" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.473351 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njgph\" (UniqueName: \"kubernetes.io/projected/4cf5ba26-bf7c-436f-9b1c-1d1a7f6bad37-kube-api-access-njgph\") pod \"cluster-image-registry-operator-dc59b4c8b-5c8bn\" (UID: \"4cf5ba26-bf7c-436f-9b1c-1d1a7f6bad37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5c8bn" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.487751 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgph7" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.496638 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:20 crc kubenswrapper[4928]: E1122 00:08:20.496844 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:20.996813039 +0000 UTC m=+136.284844250 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.496895 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8480834f-a7e6-4459-a2d2-73b968249ce7-config-volume\") pod \"collect-profiles-29396160-c8ch5\" (UID: \"8480834f-a7e6-4459-a2d2-73b968249ce7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396160-c8ch5" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.496937 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzf4w\" (UniqueName: \"kubernetes.io/projected/8480834f-a7e6-4459-a2d2-73b968249ce7-kube-api-access-kzf4w\") pod \"collect-profiles-29396160-c8ch5\" (UID: \"8480834f-a7e6-4459-a2d2-73b968249ce7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396160-c8ch5" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.496972 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppd4m\" (UniqueName: \"kubernetes.io/projected/15e32f8b-4a69-4ca9-9cf1-965c9b734ed7-kube-api-access-ppd4m\") pod \"kube-storage-version-migrator-operator-b67b599dd-xb2wb\" (UID: \"15e32f8b-4a69-4ca9-9cf1-965c9b734ed7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xb2wb" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.497001 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/2a03505d-6ba7-485d-9f59-6bf35b5c33d0-certs\") pod \"machine-config-server-vq7f7\" (UID: \"2a03505d-6ba7-485d-9f59-6bf35b5c33d0\") " pod="openshift-machine-config-operator/machine-config-server-vq7f7" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.497031 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f4d80d6f-91bd-4ea6-ac44-993a78316ba2-socket-dir\") pod \"csi-hostpathplugin-6fxj5\" (UID: \"f4d80d6f-91bd-4ea6-ac44-993a78316ba2\") " pod="hostpath-provisioner/csi-hostpathplugin-6fxj5" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.497056 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/72b08661-098c-44a0-82f1-7b1277bc65ba-cert\") pod \"ingress-canary-87wpn\" (UID: \"72b08661-098c-44a0-82f1-7b1277bc65ba\") " pod="openshift-ingress-canary/ingress-canary-87wpn" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.498297 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8235f10d-349e-42ab-93e3-ed056e36bca4-tmpfs\") pod \"packageserver-d55dfcdfc-lpmm4\" (UID: \"8235f10d-349e-42ab-93e3-ed056e36bca4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lpmm4" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.498330 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8235f10d-349e-42ab-93e3-ed056e36bca4-apiservice-cert\") pod \"packageserver-d55dfcdfc-lpmm4\" (UID: \"8235f10d-349e-42ab-93e3-ed056e36bca4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lpmm4" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.498361 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7707cfc4-56b4-4b94-8383-954071ae4bd4-config\") pod \"service-ca-operator-777779d784-mp624\" (UID: \"7707cfc4-56b4-4b94-8383-954071ae4bd4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mp624" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.498402 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f50294b-9edd-469f-8873-aeb445d7653b-serving-cert\") pod \"controller-manager-879f6c89f-hnm59\" (UID: \"4f50294b-9edd-469f-8873-aeb445d7653b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnm59" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.498440 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6416d870-85ef-468a-a9a2-b2429b06a6d3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-c4rrg\" (UID: \"6416d870-85ef-468a-a9a2-b2429b06a6d3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c4rrg" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.498229 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8480834f-a7e6-4459-a2d2-73b968249ce7-config-volume\") pod \"collect-profiles-29396160-c8ch5\" (UID: \"8480834f-a7e6-4459-a2d2-73b968249ce7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396160-c8ch5" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.497508 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f4d80d6f-91bd-4ea6-ac44-993a78316ba2-socket-dir\") pod \"csi-hostpathplugin-6fxj5\" (UID: \"f4d80d6f-91bd-4ea6-ac44-993a78316ba2\") " pod="hostpath-provisioner/csi-hostpathplugin-6fxj5" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.499263 4928 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7707cfc4-56b4-4b94-8383-954071ae4bd4-config\") pod \"service-ca-operator-777779d784-mp624\" (UID: \"7707cfc4-56b4-4b94-8383-954071ae4bd4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mp624" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.499298 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8235f10d-349e-42ab-93e3-ed056e36bca4-tmpfs\") pod \"packageserver-d55dfcdfc-lpmm4\" (UID: \"8235f10d-349e-42ab-93e3-ed056e36bca4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lpmm4" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.499314 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/47c44a11-8d04-4a95-9ce0-6436dbf6a035-profile-collector-cert\") pod \"catalog-operator-68c6474976-nzv4d\" (UID: \"47c44a11-8d04-4a95-9ce0-6436dbf6a035\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzv4d" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.499342 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62ac57bc-5d54-4f2c-83bc-8068eead3c89-config\") pod \"kube-controller-manager-operator-78b949d7b-gqrxk\" (UID: \"62ac57bc-5d54-4f2c-83bc-8068eead3c89\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqrxk" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.499729 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8cbe72dc-6cc5-48bb-921d-17466274c88a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-7wv9p\" (UID: \"8cbe72dc-6cc5-48bb-921d-17466274c88a\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wv9p" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.499756 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg8fg\" (UniqueName: \"kubernetes.io/projected/2bf436d0-26db-4a3b-b9c2-8ca66b3a37a7-kube-api-access-gg8fg\") pod \"machine-config-operator-74547568cd-ftrnl\" (UID: \"2bf436d0-26db-4a3b-b9c2-8ca66b3a37a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ftrnl" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.499780 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ec9c6599-a073-4b08-9e39-20250709a458-metrics-tls\") pod \"dns-default-dqtnp\" (UID: \"ec9c6599-a073-4b08-9e39-20250709a458\") " pod="openshift-dns/dns-default-dqtnp" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.499820 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdrth\" (UniqueName: \"kubernetes.io/projected/db624f52-ab96-42e2-a54c-62c2abc36274-kube-api-access-sdrth\") pod \"marketplace-operator-79b997595-54l8n\" (UID: \"db624f52-ab96-42e2-a54c-62c2abc36274\") " pod="openshift-marketplace/marketplace-operator-79b997595-54l8n" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.499852 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f6c2195-9090-43be-bc5a-802f7e7256d8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-chsjj\" (UID: \"7f6c2195-9090-43be-bc5a-802f7e7256d8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chsjj" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.499876 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/4765c779-803b-42f9-9598-c91eec1b5e9e-default-certificate\") pod \"router-default-5444994796-rz49f\" (UID: \"4765c779-803b-42f9-9598-c91eec1b5e9e\") " pod="openshift-ingress/router-default-5444994796-rz49f" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.499898 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2bf436d0-26db-4a3b-b9c2-8ca66b3a37a7-images\") pod \"machine-config-operator-74547568cd-ftrnl\" (UID: \"2bf436d0-26db-4a3b-b9c2-8ca66b3a37a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ftrnl" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.499921 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2bf436d0-26db-4a3b-b9c2-8ca66b3a37a7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ftrnl\" (UID: \"2bf436d0-26db-4a3b-b9c2-8ca66b3a37a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ftrnl" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.499945 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15e32f8b-4a69-4ca9-9cf1-965c9b734ed7-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xb2wb\" (UID: \"15e32f8b-4a69-4ca9-9cf1-965c9b734ed7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xb2wb" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.499967 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8h6l\" (UniqueName: \"kubernetes.io/projected/35bda3e7-0e70-4bfc-bd43-29fef60761d2-kube-api-access-z8h6l\") pod \"ingress-operator-5b745b69d9-c2zkd\" (UID: \"35bda3e7-0e70-4bfc-bd43-29fef60761d2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2zkd" Nov 22 
00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.499992 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6xb2\" (UniqueName: \"kubernetes.io/projected/47c44a11-8d04-4a95-9ce0-6436dbf6a035-kube-api-access-z6xb2\") pod \"catalog-operator-68c6474976-nzv4d\" (UID: \"47c44a11-8d04-4a95-9ce0-6436dbf6a035\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzv4d" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500015 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4765c779-803b-42f9-9598-c91eec1b5e9e-metrics-certs\") pod \"router-default-5444994796-rz49f\" (UID: \"4765c779-803b-42f9-9598-c91eec1b5e9e\") " pod="openshift-ingress/router-default-5444994796-rz49f" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500037 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35bda3e7-0e70-4bfc-bd43-29fef60761d2-trusted-ca\") pod \"ingress-operator-5b745b69d9-c2zkd\" (UID: \"35bda3e7-0e70-4bfc-bd43-29fef60761d2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2zkd" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500059 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc2f5\" (UniqueName: \"kubernetes.io/projected/2a03505d-6ba7-485d-9f59-6bf35b5c33d0-kube-api-access-fc2f5\") pod \"machine-config-server-vq7f7\" (UID: \"2a03505d-6ba7-485d-9f59-6bf35b5c33d0\") " pod="openshift-machine-config-operator/machine-config-server-vq7f7" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500081 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8480834f-a7e6-4459-a2d2-73b968249ce7-secret-volume\") pod \"collect-profiles-29396160-c8ch5\" (UID: 
\"8480834f-a7e6-4459-a2d2-73b968249ce7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396160-c8ch5" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500102 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nvm6\" (UniqueName: \"kubernetes.io/projected/6416d870-85ef-468a-a9a2-b2429b06a6d3-kube-api-access-5nvm6\") pod \"machine-config-controller-84d6567774-c4rrg\" (UID: \"6416d870-85ef-468a-a9a2-b2429b06a6d3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c4rrg" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500136 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cef7a6ad-8a53-415c-b18e-098c96f5c630-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7xwgw\" (UID: \"cef7a6ad-8a53-415c-b18e-098c96f5c630\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7xwgw" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500161 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d46a6b01-c494-4912-8b32-a496ef03461a-signing-key\") pod \"service-ca-9c57cc56f-lhcmt\" (UID: \"d46a6b01-c494-4912-8b32-a496ef03461a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lhcmt" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500182 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4765c779-803b-42f9-9598-c91eec1b5e9e-stats-auth\") pod \"router-default-5444994796-rz49f\" (UID: \"4765c779-803b-42f9-9598-c91eec1b5e9e\") " pod="openshift-ingress/router-default-5444994796-rz49f" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500208 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cbe72dc-6cc5-48bb-921d-17466274c88a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-7wv9p\" (UID: \"8cbe72dc-6cc5-48bb-921d-17466274c88a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wv9p" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500231 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/62ac57bc-5d54-4f2c-83bc-8068eead3c89-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gqrxk\" (UID: \"62ac57bc-5d54-4f2c-83bc-8068eead3c89\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqrxk" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500254 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7707cfc4-56b4-4b94-8383-954071ae4bd4-serving-cert\") pod \"service-ca-operator-777779d784-mp624\" (UID: \"7707cfc4-56b4-4b94-8383-954071ae4bd4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mp624" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500279 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2a03505d-6ba7-485d-9f59-6bf35b5c33d0-node-bootstrap-token\") pod \"machine-config-server-vq7f7\" (UID: \"2a03505d-6ba7-485d-9f59-6bf35b5c33d0\") " pod="openshift-machine-config-operator/machine-config-server-vq7f7" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500345 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/db624f52-ab96-42e2-a54c-62c2abc36274-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-54l8n\" (UID: \"db624f52-ab96-42e2-a54c-62c2abc36274\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-54l8n" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500374 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmjqs\" (UniqueName: \"kubernetes.io/projected/72b08661-098c-44a0-82f1-7b1277bc65ba-kube-api-access-nmjqs\") pod \"ingress-canary-87wpn\" (UID: \"72b08661-098c-44a0-82f1-7b1277bc65ba\") " pod="openshift-ingress-canary/ingress-canary-87wpn" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500399 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f49e92c-7ef7-4cd8-9dda-fbc8e8694b79-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rjn2t\" (UID: \"6f49e92c-7ef7-4cd8-9dda-fbc8e8694b79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjn2t" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500423 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86tf8\" (UniqueName: \"kubernetes.io/projected/8235f10d-349e-42ab-93e3-ed056e36bca4-kube-api-access-86tf8\") pod \"packageserver-d55dfcdfc-lpmm4\" (UID: \"8235f10d-349e-42ab-93e3-ed056e36bca4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lpmm4" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500448 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f50294b-9edd-469f-8873-aeb445d7653b-config\") pod \"controller-manager-879f6c89f-hnm59\" (UID: \"4f50294b-9edd-469f-8873-aeb445d7653b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnm59" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500451 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/6416d870-85ef-468a-a9a2-b2429b06a6d3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-c4rrg\" (UID: \"6416d870-85ef-468a-a9a2-b2429b06a6d3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c4rrg" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500475 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d46a6b01-c494-4912-8b32-a496ef03461a-signing-cabundle\") pod \"service-ca-9c57cc56f-lhcmt\" (UID: \"d46a6b01-c494-4912-8b32-a496ef03461a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lhcmt" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500498 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq48r\" (UniqueName: \"kubernetes.io/projected/05f6d2a7-33e5-42fe-8d92-bdb258597a56-kube-api-access-fq48r\") pod \"migrator-59844c95c7-j72fl\" (UID: \"05f6d2a7-33e5-42fe-8d92-bdb258597a56\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j72fl" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500528 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6416d870-85ef-468a-a9a2-b2429b06a6d3-proxy-tls\") pod \"machine-config-controller-84d6567774-c4rrg\" (UID: \"6416d870-85ef-468a-a9a2-b2429b06a6d3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c4rrg" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500553 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg42k\" (UniqueName: \"kubernetes.io/projected/cef7a6ad-8a53-415c-b18e-098c96f5c630-kube-api-access-tg42k\") pod \"control-plane-machine-set-operator-78cbb6b69f-7xwgw\" (UID: \"cef7a6ad-8a53-415c-b18e-098c96f5c630\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7xwgw" Nov 22 
00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500578 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8235f10d-349e-42ab-93e3-ed056e36bca4-webhook-cert\") pod \"packageserver-d55dfcdfc-lpmm4\" (UID: \"8235f10d-349e-42ab-93e3-ed056e36bca4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lpmm4" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500615 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f49e92c-7ef7-4cd8-9dda-fbc8e8694b79-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rjn2t\" (UID: \"6f49e92c-7ef7-4cd8-9dda-fbc8e8694b79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjn2t" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500649 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47c44a11-8d04-4a95-9ce0-6436dbf6a035-srv-cert\") pod \"catalog-operator-68c6474976-nzv4d\" (UID: \"47c44a11-8d04-4a95-9ce0-6436dbf6a035\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzv4d" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500679 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2qxn\" (UniqueName: \"kubernetes.io/projected/7707cfc4-56b4-4b94-8383-954071ae4bd4-kube-api-access-t2qxn\") pod \"service-ca-operator-777779d784-mp624\" (UID: \"7707cfc4-56b4-4b94-8383-954071ae4bd4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mp624" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500706 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csjcx\" (UniqueName: \"kubernetes.io/projected/13f49211-e6af-49ed-823f-d1b54d73ac0b-kube-api-access-csjcx\") pod 
\"dns-operator-744455d44c-c64mx\" (UID: \"13f49211-e6af-49ed-823f-d1b54d73ac0b\") " pod="openshift-dns-operator/dns-operator-744455d44c-c64mx" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500729 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/35bda3e7-0e70-4bfc-bd43-29fef60761d2-metrics-tls\") pod \"ingress-operator-5b745b69d9-c2zkd\" (UID: \"35bda3e7-0e70-4bfc-bd43-29fef60761d2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2zkd" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500752 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13f49211-e6af-49ed-823f-d1b54d73ac0b-metrics-tls\") pod \"dns-operator-744455d44c-c64mx\" (UID: \"13f49211-e6af-49ed-823f-d1b54d73ac0b\") " pod="openshift-dns-operator/dns-operator-744455d44c-c64mx" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500776 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z58g2\" (UniqueName: \"kubernetes.io/projected/f4d80d6f-91bd-4ea6-ac44-993a78316ba2-kube-api-access-z58g2\") pod \"csi-hostpathplugin-6fxj5\" (UID: \"f4d80d6f-91bd-4ea6-ac44-993a78316ba2\") " pod="hostpath-provisioner/csi-hostpathplugin-6fxj5" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500828 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2mxv\" (UniqueName: \"kubernetes.io/projected/ec9c6599-a073-4b08-9e39-20250709a458-kube-api-access-n2mxv\") pod \"dns-default-dqtnp\" (UID: \"ec9c6599-a073-4b08-9e39-20250709a458\") " pod="openshift-dns/dns-default-dqtnp" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500855 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f4d80d6f-91bd-4ea6-ac44-993a78316ba2-registration-dir\") 
pod \"csi-hostpathplugin-6fxj5\" (UID: \"f4d80d6f-91bd-4ea6-ac44-993a78316ba2\") " pod="hostpath-provisioner/csi-hostpathplugin-6fxj5" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500877 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f49e92c-7ef7-4cd8-9dda-fbc8e8694b79-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rjn2t\" (UID: \"6f49e92c-7ef7-4cd8-9dda-fbc8e8694b79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjn2t" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500900 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f50294b-9edd-469f-8873-aeb445d7653b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hnm59\" (UID: \"4f50294b-9edd-469f-8873-aeb445d7653b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnm59" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500938 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f4d80d6f-91bd-4ea6-ac44-993a78316ba2-plugins-dir\") pod \"csi-hostpathplugin-6fxj5\" (UID: \"f4d80d6f-91bd-4ea6-ac44-993a78316ba2\") " pod="hostpath-provisioner/csi-hostpathplugin-6fxj5" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500961 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp2wf\" (UniqueName: \"kubernetes.io/projected/4f50294b-9edd-469f-8873-aeb445d7653b-kube-api-access-hp2wf\") pod \"controller-manager-879f6c89f-hnm59\" (UID: \"4f50294b-9edd-469f-8873-aeb445d7653b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnm59" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.500983 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/db624f52-ab96-42e2-a54c-62c2abc36274-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-54l8n\" (UID: \"db624f52-ab96-42e2-a54c-62c2abc36274\") " pod="openshift-marketplace/marketplace-operator-79b997595-54l8n" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.501006 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2bf436d0-26db-4a3b-b9c2-8ca66b3a37a7-proxy-tls\") pod \"machine-config-operator-74547568cd-ftrnl\" (UID: \"2bf436d0-26db-4a3b-b9c2-8ca66b3a37a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ftrnl" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.501039 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f50294b-9edd-469f-8873-aeb445d7653b-client-ca\") pod \"controller-manager-879f6c89f-hnm59\" (UID: \"4f50294b-9edd-469f-8873-aeb445d7653b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnm59" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.501064 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwkbn\" (UniqueName: \"kubernetes.io/projected/d46a6b01-c494-4912-8b32-a496ef03461a-kube-api-access-qwkbn\") pod \"service-ca-9c57cc56f-lhcmt\" (UID: \"d46a6b01-c494-4912-8b32-a496ef03461a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lhcmt" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.501099 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcthd\" (UniqueName: \"kubernetes.io/projected/4765c779-803b-42f9-9598-c91eec1b5e9e-kube-api-access-rcthd\") pod \"router-default-5444994796-rz49f\" (UID: \"4765c779-803b-42f9-9598-c91eec1b5e9e\") " pod="openshift-ingress/router-default-5444994796-rz49f" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.501122 4928 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62ac57bc-5d54-4f2c-83bc-8068eead3c89-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gqrxk\" (UID: \"62ac57bc-5d54-4f2c-83bc-8068eead3c89\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqrxk" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.501147 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0a5b438c-2c33-49fa-a7c9-98aef8f764f3-srv-cert\") pod \"olm-operator-6b444d44fb-qmvbl\" (UID: \"0a5b438c-2c33-49fa-a7c9-98aef8f764f3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmvbl" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.501168 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4765c779-803b-42f9-9598-c91eec1b5e9e-service-ca-bundle\") pod \"router-default-5444994796-rz49f\" (UID: \"4765c779-803b-42f9-9598-c91eec1b5e9e\") " pod="openshift-ingress/router-default-5444994796-rz49f" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.501191 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f4d80d6f-91bd-4ea6-ac44-993a78316ba2-csi-data-dir\") pod \"csi-hostpathplugin-6fxj5\" (UID: \"f4d80d6f-91bd-4ea6-ac44-993a78316ba2\") " pod="hostpath-provisioner/csi-hostpathplugin-6fxj5" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.501222 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0a5b438c-2c33-49fa-a7c9-98aef8f764f3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qmvbl\" (UID: \"0a5b438c-2c33-49fa-a7c9-98aef8f764f3\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmvbl" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.501244 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f4d80d6f-91bd-4ea6-ac44-993a78316ba2-mountpoint-dir\") pod \"csi-hostpathplugin-6fxj5\" (UID: \"f4d80d6f-91bd-4ea6-ac44-993a78316ba2\") " pod="hostpath-provisioner/csi-hostpathplugin-6fxj5" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.501267 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15e32f8b-4a69-4ca9-9cf1-965c9b734ed7-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xb2wb\" (UID: \"15e32f8b-4a69-4ca9-9cf1-965c9b734ed7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xb2wb" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.501282 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f50294b-9edd-469f-8873-aeb445d7653b-serving-cert\") pod \"controller-manager-879f6c89f-hnm59\" (UID: \"4f50294b-9edd-469f-8873-aeb445d7653b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnm59" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.501289 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec9c6599-a073-4b08-9e39-20250709a458-config-volume\") pod \"dns-default-dqtnp\" (UID: \"ec9c6599-a073-4b08-9e39-20250709a458\") " pod="openshift-dns/dns-default-dqtnp" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.501318 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmftb\" (UniqueName: \"kubernetes.io/projected/0a5b438c-2c33-49fa-a7c9-98aef8f764f3-kube-api-access-qmftb\") pod 
\"olm-operator-6b444d44fb-qmvbl\" (UID: \"0a5b438c-2c33-49fa-a7c9-98aef8f764f3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmvbl" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.501336 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prk9b\" (UniqueName: \"kubernetes.io/projected/7f6c2195-9090-43be-bc5a-802f7e7256d8-kube-api-access-prk9b\") pod \"package-server-manager-789f6589d5-chsjj\" (UID: \"7f6c2195-9090-43be-bc5a-802f7e7256d8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chsjj" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.501353 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/35bda3e7-0e70-4bfc-bd43-29fef60761d2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-c2zkd\" (UID: \"35bda3e7-0e70-4bfc-bd43-29fef60761d2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2zkd" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.501381 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.501397 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cbe72dc-6cc5-48bb-921d-17466274c88a-config\") pod \"kube-apiserver-operator-766d6c64bb-7wv9p\" (UID: \"8cbe72dc-6cc5-48bb-921d-17466274c88a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wv9p" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.501654 4928 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2a03505d-6ba7-485d-9f59-6bf35b5c33d0-certs\") pod \"machine-config-server-vq7f7\" (UID: \"2a03505d-6ba7-485d-9f59-6bf35b5c33d0\") " pod="openshift-machine-config-operator/machine-config-server-vq7f7" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.501849 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec9c6599-a073-4b08-9e39-20250709a458-config-volume\") pod \"dns-default-dqtnp\" (UID: \"ec9c6599-a073-4b08-9e39-20250709a458\") " pod="openshift-dns/dns-default-dqtnp" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.501971 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cbe72dc-6cc5-48bb-921d-17466274c88a-config\") pod \"kube-apiserver-operator-766d6c64bb-7wv9p\" (UID: \"8cbe72dc-6cc5-48bb-921d-17466274c88a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wv9p" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.502378 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/72b08661-098c-44a0-82f1-7b1277bc65ba-cert\") pod \"ingress-canary-87wpn\" (UID: \"72b08661-098c-44a0-82f1-7b1277bc65ba\") " pod="openshift-ingress-canary/ingress-canary-87wpn" Nov 22 00:08:20 crc kubenswrapper[4928]: E1122 00:08:20.502425 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:21.00241264 +0000 UTC m=+136.290443831 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.503620 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15e32f8b-4a69-4ca9-9cf1-965c9b734ed7-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xb2wb\" (UID: \"15e32f8b-4a69-4ca9-9cf1-965c9b734ed7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xb2wb" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.504106 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4765c779-803b-42f9-9598-c91eec1b5e9e-metrics-certs\") pod \"router-default-5444994796-rz49f\" (UID: \"4765c779-803b-42f9-9598-c91eec1b5e9e\") " pod="openshift-ingress/router-default-5444994796-rz49f" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.504205 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2bf436d0-26db-4a3b-b9c2-8ca66b3a37a7-images\") pod \"machine-config-operator-74547568cd-ftrnl\" (UID: \"2bf436d0-26db-4a3b-b9c2-8ca66b3a37a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ftrnl" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.505289 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2bf436d0-26db-4a3b-b9c2-8ca66b3a37a7-auth-proxy-config\") pod 
\"machine-config-operator-74547568cd-ftrnl\" (UID: \"2bf436d0-26db-4a3b-b9c2-8ca66b3a37a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ftrnl" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.505384 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f4d80d6f-91bd-4ea6-ac44-993a78316ba2-registration-dir\") pod \"csi-hostpathplugin-6fxj5\" (UID: \"f4d80d6f-91bd-4ea6-ac44-993a78316ba2\") " pod="hostpath-provisioner/csi-hostpathplugin-6fxj5" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.505602 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62ac57bc-5d54-4f2c-83bc-8068eead3c89-config\") pod \"kube-controller-manager-operator-78b949d7b-gqrxk\" (UID: \"62ac57bc-5d54-4f2c-83bc-8068eead3c89\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqrxk" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.505737 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8235f10d-349e-42ab-93e3-ed056e36bca4-apiservice-cert\") pod \"packageserver-d55dfcdfc-lpmm4\" (UID: \"8235f10d-349e-42ab-93e3-ed056e36bca4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lpmm4" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.506591 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ec9c6599-a073-4b08-9e39-20250709a458-metrics-tls\") pod \"dns-default-dqtnp\" (UID: \"ec9c6599-a073-4b08-9e39-20250709a458\") " pod="openshift-dns/dns-default-dqtnp" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.506923 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/4765c779-803b-42f9-9598-c91eec1b5e9e-service-ca-bundle\") pod \"router-default-5444994796-rz49f\" (UID: \"4765c779-803b-42f9-9598-c91eec1b5e9e\") " pod="openshift-ingress/router-default-5444994796-rz49f" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.506921 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f50294b-9edd-469f-8873-aeb445d7653b-client-ca\") pod \"controller-manager-879f6c89f-hnm59\" (UID: \"4f50294b-9edd-469f-8873-aeb445d7653b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnm59" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.506983 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/670d4817-7813-4326-9eb3-d3319ecc7ae2-bound-sa-token\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.506993 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f4d80d6f-91bd-4ea6-ac44-993a78316ba2-plugins-dir\") pod \"csi-hostpathplugin-6fxj5\" (UID: \"f4d80d6f-91bd-4ea6-ac44-993a78316ba2\") " pod="hostpath-provisioner/csi-hostpathplugin-6fxj5" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.507431 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35bda3e7-0e70-4bfc-bd43-29fef60761d2-trusted-ca\") pod \"ingress-operator-5b745b69d9-c2zkd\" (UID: \"35bda3e7-0e70-4bfc-bd43-29fef60761d2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2zkd" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.507051 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/4765c779-803b-42f9-9598-c91eec1b5e9e-default-certificate\") pod \"router-default-5444994796-rz49f\" (UID: \"4765c779-803b-42f9-9598-c91eec1b5e9e\") " pod="openshift-ingress/router-default-5444994796-rz49f" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.508936 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f50294b-9edd-469f-8873-aeb445d7653b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hnm59\" (UID: \"4f50294b-9edd-469f-8873-aeb445d7653b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnm59" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.510042 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f4d80d6f-91bd-4ea6-ac44-993a78316ba2-csi-data-dir\") pod \"csi-hostpathplugin-6fxj5\" (UID: \"f4d80d6f-91bd-4ea6-ac44-993a78316ba2\") " pod="hostpath-provisioner/csi-hostpathplugin-6fxj5" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.510260 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2a03505d-6ba7-485d-9f59-6bf35b5c33d0-node-bootstrap-token\") pod \"machine-config-server-vq7f7\" (UID: \"2a03505d-6ba7-485d-9f59-6bf35b5c33d0\") " pod="openshift-machine-config-operator/machine-config-server-vq7f7" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.510314 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/47c44a11-8d04-4a95-9ce0-6436dbf6a035-profile-collector-cert\") pod \"catalog-operator-68c6474976-nzv4d\" (UID: \"47c44a11-8d04-4a95-9ce0-6436dbf6a035\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzv4d" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.510843 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15e32f8b-4a69-4ca9-9cf1-965c9b734ed7-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xb2wb\" (UID: \"15e32f8b-4a69-4ca9-9cf1-965c9b734ed7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xb2wb" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.511103 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.511138 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db624f52-ab96-42e2-a54c-62c2abc36274-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-54l8n\" (UID: \"db624f52-ab96-42e2-a54c-62c2abc36274\") " pod="openshift-marketplace/marketplace-operator-79b997595-54l8n" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.511340 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6416d870-85ef-468a-a9a2-b2429b06a6d3-proxy-tls\") pod \"machine-config-controller-84d6567774-c4rrg\" (UID: \"6416d870-85ef-468a-a9a2-b2429b06a6d3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c4rrg" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.512309 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cbe72dc-6cc5-48bb-921d-17466274c88a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-7wv9p\" (UID: \"8cbe72dc-6cc5-48bb-921d-17466274c88a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wv9p" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.513288 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/d46a6b01-c494-4912-8b32-a496ef03461a-signing-cabundle\") pod \"service-ca-9c57cc56f-lhcmt\" (UID: \"d46a6b01-c494-4912-8b32-a496ef03461a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lhcmt" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.513597 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f4d80d6f-91bd-4ea6-ac44-993a78316ba2-mountpoint-dir\") pod \"csi-hostpathplugin-6fxj5\" (UID: \"f4d80d6f-91bd-4ea6-ac44-993a78316ba2\") " pod="hostpath-provisioner/csi-hostpathplugin-6fxj5" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.514866 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47c44a11-8d04-4a95-9ce0-6436dbf6a035-srv-cert\") pod \"catalog-operator-68c6474976-nzv4d\" (UID: \"47c44a11-8d04-4a95-9ce0-6436dbf6a035\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzv4d" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.514985 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f50294b-9edd-469f-8873-aeb445d7653b-config\") pod \"controller-manager-879f6c89f-hnm59\" (UID: \"4f50294b-9edd-469f-8873-aeb445d7653b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnm59" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.515181 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8235f10d-349e-42ab-93e3-ed056e36bca4-webhook-cert\") pod \"packageserver-d55dfcdfc-lpmm4\" (UID: \"8235f10d-349e-42ab-93e3-ed056e36bca4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lpmm4" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.517052 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/2bf436d0-26db-4a3b-b9c2-8ca66b3a37a7-proxy-tls\") pod \"machine-config-operator-74547568cd-ftrnl\" (UID: \"2bf436d0-26db-4a3b-b9c2-8ca66b3a37a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ftrnl" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.518371 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8480834f-a7e6-4459-a2d2-73b968249ce7-secret-volume\") pod \"collect-profiles-29396160-c8ch5\" (UID: \"8480834f-a7e6-4459-a2d2-73b968249ce7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396160-c8ch5" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.518556 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7707cfc4-56b4-4b94-8383-954071ae4bd4-serving-cert\") pod \"service-ca-operator-777779d784-mp624\" (UID: \"7707cfc4-56b4-4b94-8383-954071ae4bd4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mp624" Nov 22 00:08:20 crc kubenswrapper[4928]: W1122 00:08:20.519069 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff645533_ce91_410f_9b30_134b6ab134f4.slice/crio-17eead7db9b5074fba7692abae0fbfbb203dec6cd4979c3d662b01fc56cca12b WatchSource:0}: Error finding container 17eead7db9b5074fba7692abae0fbfbb203dec6cd4979c3d662b01fc56cca12b: Status 404 returned error can't find the container with id 17eead7db9b5074fba7692abae0fbfbb203dec6cd4979c3d662b01fc56cca12b Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.519702 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f49e92c-7ef7-4cd8-9dda-fbc8e8694b79-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rjn2t\" (UID: \"6f49e92c-7ef7-4cd8-9dda-fbc8e8694b79\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjn2t" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.520442 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f49e92c-7ef7-4cd8-9dda-fbc8e8694b79-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rjn2t\" (UID: \"6f49e92c-7ef7-4cd8-9dda-fbc8e8694b79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjn2t" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.522414 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f6c2195-9090-43be-bc5a-802f7e7256d8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-chsjj\" (UID: \"7f6c2195-9090-43be-bc5a-802f7e7256d8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chsjj" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.525140 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4765c779-803b-42f9-9598-c91eec1b5e9e-stats-auth\") pod \"router-default-5444994796-rz49f\" (UID: \"4765c779-803b-42f9-9598-c91eec1b5e9e\") " pod="openshift-ingress/router-default-5444994796-rz49f" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.527733 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p266w\" (UniqueName: \"kubernetes.io/projected/670d4817-7813-4326-9eb3-d3319ecc7ae2-kube-api-access-p266w\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.530691 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/35bda3e7-0e70-4bfc-bd43-29fef60761d2-metrics-tls\") pod \"ingress-operator-5b745b69d9-c2zkd\" (UID: \"35bda3e7-0e70-4bfc-bd43-29fef60761d2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2zkd" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.534846 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0a5b438c-2c33-49fa-a7c9-98aef8f764f3-srv-cert\") pod \"olm-operator-6b444d44fb-qmvbl\" (UID: \"0a5b438c-2c33-49fa-a7c9-98aef8f764f3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmvbl" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.536375 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13f49211-e6af-49ed-823f-d1b54d73ac0b-metrics-tls\") pod \"dns-operator-744455d44c-c64mx\" (UID: \"13f49211-e6af-49ed-823f-d1b54d73ac0b\") " pod="openshift-dns-operator/dns-operator-744455d44c-c64mx" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.537810 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0a5b438c-2c33-49fa-a7c9-98aef8f764f3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qmvbl\" (UID: \"0a5b438c-2c33-49fa-a7c9-98aef8f764f3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmvbl" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.538149 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/db624f52-ab96-42e2-a54c-62c2abc36274-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-54l8n\" (UID: \"db624f52-ab96-42e2-a54c-62c2abc36274\") " pod="openshift-marketplace/marketplace-operator-79b997595-54l8n" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.539002 4928 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d46a6b01-c494-4912-8b32-a496ef03461a-signing-key\") pod \"service-ca-9c57cc56f-lhcmt\" (UID: \"d46a6b01-c494-4912-8b32-a496ef03461a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lhcmt" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.540816 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cef7a6ad-8a53-415c-b18e-098c96f5c630-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7xwgw\" (UID: \"cef7a6ad-8a53-415c-b18e-098c96f5c630\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7xwgw" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.541058 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-kxr5c" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.543011 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62ac57bc-5d54-4f2c-83bc-8068eead3c89-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gqrxk\" (UID: \"62ac57bc-5d54-4f2c-83bc-8068eead3c89\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqrxk" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.551468 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dd9d\" (UniqueName: \"kubernetes.io/projected/48e4652a-8c3c-4128-b0f9-8a5ebd25396e-kube-api-access-5dd9d\") pod \"openshift-config-operator-7777fb866f-n62d6\" (UID: \"48e4652a-8c3c-4128-b0f9-8a5ebd25396e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n62d6" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.559532 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8hklq" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.583595 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zppzf\" (UniqueName: \"kubernetes.io/projected/9b218ffc-ca02-4e5b-bc3b-40509f3ee9e3-kube-api-access-zppzf\") pod \"cluster-samples-operator-665b6dd947-rgn8z\" (UID: \"9b218ffc-ca02-4e5b-bc3b-40509f3ee9e3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rgn8z" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.585860 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-87wq5"] Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.592778 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29396160-qh9g4" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.602231 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:20 crc kubenswrapper[4928]: E1122 00:08:20.602684 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:21.102668859 +0000 UTC m=+136.390700050 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.604500 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7m8h\" (UniqueName: \"kubernetes.io/projected/294bb8b1-6c6b-45a6-aa39-33e662ff511d-kube-api-access-v7m8h\") pod \"openshift-controller-manager-operator-756b6f6bc6-m9ggs\" (UID: \"294bb8b1-6c6b-45a6-aa39-33e662ff511d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m9ggs" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.605748 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rgn8z" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.621181 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g8hz8" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.623162 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tmdvd"] Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.634536 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzf4w\" (UniqueName: \"kubernetes.io/projected/8480834f-a7e6-4459-a2d2-73b968249ce7-kube-api-access-kzf4w\") pod \"collect-profiles-29396160-c8ch5\" (UID: \"8480834f-a7e6-4459-a2d2-73b968249ce7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396160-c8ch5" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.636868 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n62d6" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.641130 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppd4m\" (UniqueName: \"kubernetes.io/projected/15e32f8b-4a69-4ca9-9cf1-965c9b734ed7-kube-api-access-ppd4m\") pod \"kube-storage-version-migrator-operator-b67b599dd-xb2wb\" (UID: \"15e32f8b-4a69-4ca9-9cf1-965c9b734ed7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xb2wb" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.652820 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-fkl9p" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.660059 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.663516 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/62ac57bc-5d54-4f2c-83bc-8068eead3c89-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gqrxk\" (UID: \"62ac57bc-5d54-4f2c-83bc-8068eead3c89\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqrxk" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.669862 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-bttq9" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.676772 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5c8bn" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.678347 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zx7h8"] Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.683015 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmftb\" (UniqueName: \"kubernetes.io/projected/0a5b438c-2c33-49fa-a7c9-98aef8f764f3-kube-api-access-qmftb\") pod \"olm-operator-6b444d44fb-qmvbl\" (UID: \"0a5b438c-2c33-49fa-a7c9-98aef8f764f3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmvbl" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.685019 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m9ggs" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.703717 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:20 crc kubenswrapper[4928]: E1122 00:08:20.704141 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:21.204126726 +0000 UTC m=+136.492157917 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.716693 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prk9b\" (UniqueName: \"kubernetes.io/projected/7f6c2195-9090-43be-bc5a-802f7e7256d8-kube-api-access-prk9b\") pod \"package-server-manager-789f6589d5-chsjj\" (UID: \"7f6c2195-9090-43be-bc5a-802f7e7256d8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chsjj" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.725205 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xb2wb" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.725762 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-25w2t"] Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.735497 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/35bda3e7-0e70-4bfc-bd43-29fef60761d2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-c2zkd\" (UID: \"35bda3e7-0e70-4bfc-bd43-29fef60761d2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2zkd" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.759661 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8h6l\" (UniqueName: \"kubernetes.io/projected/35bda3e7-0e70-4bfc-bd43-29fef60761d2-kube-api-access-z8h6l\") pod \"ingress-operator-5b745b69d9-c2zkd\" (UID: \"35bda3e7-0e70-4bfc-bd43-29fef60761d2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2zkd" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.759684 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6xb2\" (UniqueName: \"kubernetes.io/projected/47c44a11-8d04-4a95-9ce0-6436dbf6a035-kube-api-access-z6xb2\") pod \"catalog-operator-68c6474976-nzv4d\" (UID: \"47c44a11-8d04-4a95-9ce0-6436dbf6a035\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzv4d" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.804508 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:20 crc 
kubenswrapper[4928]: E1122 00:08:20.805531 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:21.30551275 +0000 UTC m=+136.593543941 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.810576 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z58g2\" (UniqueName: \"kubernetes.io/projected/f4d80d6f-91bd-4ea6-ac44-993a78316ba2-kube-api-access-z58g2\") pod \"csi-hostpathplugin-6fxj5\" (UID: \"f4d80d6f-91bd-4ea6-ac44-993a78316ba2\") " pod="hostpath-provisioner/csi-hostpathplugin-6fxj5" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.821529 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zx7h8" event={"ID":"d29a2448-633a-472f-a072-849f5d90297e","Type":"ContainerStarted","Data":"a0927263827733eb108a29c10d44d76e6b035ccadb0f84ddbbea1f762fa45e2a"} Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.824104 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmvbl" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.826218 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-87wq5" event={"ID":"5b36deea-9635-49f8-862a-b9067233fd24","Type":"ContainerStarted","Data":"c0a76ff037d08d37ba7a7d5d9ba17be862f228c2f6a448a04a6fa73b0faa1417"} Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.826464 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2mxv\" (UniqueName: \"kubernetes.io/projected/ec9c6599-a073-4b08-9e39-20250709a458-kube-api-access-n2mxv\") pod \"dns-default-dqtnp\" (UID: \"ec9c6599-a073-4b08-9e39-20250709a458\") " pod="openshift-dns/dns-default-dqtnp" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.828183 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgph7" event={"ID":"ff645533-ce91-410f-9b30-134b6ab134f4","Type":"ContainerStarted","Data":"17eead7db9b5074fba7692abae0fbfbb203dec6cd4979c3d662b01fc56cca12b"} Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.830821 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" event={"ID":"42bd19a9-be1a-4c00-bf19-536e55a1ae11","Type":"ContainerStarted","Data":"ccc4055435ffae915b31b3deac007d11a9d1917cd7ee8c31634aed9f7a1676fe"} Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.831066 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396160-c8ch5" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.834435 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8cbe72dc-6cc5-48bb-921d-17466274c88a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-7wv9p\" (UID: \"8cbe72dc-6cc5-48bb-921d-17466274c88a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wv9p" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.843185 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chsjj" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.860076 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg8fg\" (UniqueName: \"kubernetes.io/projected/2bf436d0-26db-4a3b-b9c2-8ca66b3a37a7-kube-api-access-gg8fg\") pod \"machine-config-operator-74547568cd-ftrnl\" (UID: \"2bf436d0-26db-4a3b-b9c2-8ca66b3a37a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ftrnl" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.871121 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwkbn\" (UniqueName: \"kubernetes.io/projected/d46a6b01-c494-4912-8b32-a496ef03461a-kube-api-access-qwkbn\") pod \"service-ca-9c57cc56f-lhcmt\" (UID: \"d46a6b01-c494-4912-8b32-a496ef03461a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lhcmt" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.883670 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcthd\" (UniqueName: \"kubernetes.io/projected/4765c779-803b-42f9-9598-c91eec1b5e9e-kube-api-access-rcthd\") pod \"router-default-5444994796-rz49f\" (UID: \"4765c779-803b-42f9-9598-c91eec1b5e9e\") " 
pod="openshift-ingress/router-default-5444994796-rz49f" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.892316 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqrxk" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.901991 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzv4d" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.903133 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csjcx\" (UniqueName: \"kubernetes.io/projected/13f49211-e6af-49ed-823f-d1b54d73ac0b-kube-api-access-csjcx\") pod \"dns-operator-744455d44c-c64mx\" (UID: \"13f49211-e6af-49ed-823f-d1b54d73ac0b\") " pod="openshift-dns-operator/dns-operator-744455d44c-c64mx" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.905713 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:20 crc kubenswrapper[4928]: E1122 00:08:20.906044 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:21.406031558 +0000 UTC m=+136.694062749 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.922858 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq48r\" (UniqueName: \"kubernetes.io/projected/05f6d2a7-33e5-42fe-8d92-bdb258597a56-kube-api-access-fq48r\") pod \"migrator-59844c95c7-j72fl\" (UID: \"05f6d2a7-33e5-42fe-8d92-bdb258597a56\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j72fl" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.943233 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-dqtnp" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.949968 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp2wf\" (UniqueName: \"kubernetes.io/projected/4f50294b-9edd-469f-8873-aeb445d7653b-kube-api-access-hp2wf\") pod \"controller-manager-879f6c89f-hnm59\" (UID: \"4f50294b-9edd-469f-8873-aeb445d7653b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnm59" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.957153 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lhcmt" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.964214 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdrth\" (UniqueName: \"kubernetes.io/projected/db624f52-ab96-42e2-a54c-62c2abc36274-kube-api-access-sdrth\") pod \"marketplace-operator-79b997595-54l8n\" (UID: \"db624f52-ab96-42e2-a54c-62c2abc36274\") " pod="openshift-marketplace/marketplace-operator-79b997595-54l8n" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.976380 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wqtln"] Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.987359 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86tf8\" (UniqueName: \"kubernetes.io/projected/8235f10d-349e-42ab-93e3-ed056e36bca4-kube-api-access-86tf8\") pod \"packageserver-d55dfcdfc-lpmm4\" (UID: \"8235f10d-349e-42ab-93e3-ed056e36bca4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lpmm4" Nov 22 00:08:20 crc kubenswrapper[4928]: I1122 00:08:20.993614 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6fxj5" Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:20.999586 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg42k\" (UniqueName: \"kubernetes.io/projected/cef7a6ad-8a53-415c-b18e-098c96f5c630-kube-api-access-tg42k\") pod \"control-plane-machine-set-operator-78cbb6b69f-7xwgw\" (UID: \"cef7a6ad-8a53-415c-b18e-098c96f5c630\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7xwgw" Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.002194 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wv9p" Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.006353 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:21 crc kubenswrapper[4928]: E1122 00:08:21.006935 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:21.506900726 +0000 UTC m=+136.794931917 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.007082 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:21 crc kubenswrapper[4928]: E1122 00:08:21.007442 4928 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:21.507433762 +0000 UTC m=+136.795464953 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.020552 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-c64mx" Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.029559 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc2f5\" (UniqueName: \"kubernetes.io/projected/2a03505d-6ba7-485d-9f59-6bf35b5c33d0-kube-api-access-fc2f5\") pod \"machine-config-server-vq7f7\" (UID: \"2a03505d-6ba7-485d-9f59-6bf35b5c33d0\") " pod="openshift-machine-config-operator/machine-config-server-vq7f7" Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.037614 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-rz49f" Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.043414 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2zkd" Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.059858 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f49e92c-7ef7-4cd8-9dda-fbc8e8694b79-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rjn2t\" (UID: \"6f49e92c-7ef7-4cd8-9dda-fbc8e8694b79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjn2t" Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.067441 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2qxn\" (UniqueName: \"kubernetes.io/projected/7707cfc4-56b4-4b94-8383-954071ae4bd4-kube-api-access-t2qxn\") pod \"service-ca-operator-777779d784-mp624\" (UID: \"7707cfc4-56b4-4b94-8383-954071ae4bd4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mp624" Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.072165 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29396160-qh9g4"] Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.080936 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nvm6\" (UniqueName: \"kubernetes.io/projected/6416d870-85ef-468a-a9a2-b2429b06a6d3-kube-api-access-5nvm6\") pod \"machine-config-controller-84d6567774-c4rrg\" (UID: \"6416d870-85ef-468a-a9a2-b2429b06a6d3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c4rrg" Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.090713 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjn2t" Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.097727 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ftrnl" Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.102054 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmjqs\" (UniqueName: \"kubernetes.io/projected/72b08661-098c-44a0-82f1-7b1277bc65ba-kube-api-access-nmjqs\") pod \"ingress-canary-87wpn\" (UID: \"72b08661-098c-44a0-82f1-7b1277bc65ba\") " pod="openshift-ingress-canary/ingress-canary-87wpn" Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.105975 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7xwgw" Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.110553 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:21 crc kubenswrapper[4928]: E1122 00:08:21.110954 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:21.610939861 +0000 UTC m=+136.898971052 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.115317 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lpmm4" Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.127403 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk"] Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.133666 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8hklq"] Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.139095 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-kxr5c"] Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.146756 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c4rrg" Nov 22 00:08:21 crc kubenswrapper[4928]: W1122 00:08:21.152218 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4765c779_803b_42f9_9598_c91eec1b5e9e.slice/crio-0df569b8781f7af8d8c71ec7ef52a5f2f8c06de6d3fa75b25c5bcc581ba3cfec WatchSource:0}: Error finding container 0df569b8781f7af8d8c71ec7ef52a5f2f8c06de6d3fa75b25c5bcc581ba3cfec: Status 404 returned error can't find the container with id 0df569b8781f7af8d8c71ec7ef52a5f2f8c06de6d3fa75b25c5bcc581ba3cfec Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.219821 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:21 crc kubenswrapper[4928]: E1122 00:08:21.220118 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:21.720104162 +0000 UTC m=+137.008135353 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.220214 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hnm59" Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.220247 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j72fl" Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.224830 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-54l8n" Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.234278 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mp624" Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.260544 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-vq7f7" Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.268536 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-87wpn" Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.322526 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:21 crc kubenswrapper[4928]: E1122 00:08:21.323267 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:21.823252909 +0000 UTC m=+137.111284100 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.423955 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:21 crc kubenswrapper[4928]: E1122 00:08:21.426990 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2025-11-22 00:08:21.926969144 +0000 UTC m=+137.215000335 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.511411 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rgn8z"] Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.512558 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-n62d6"] Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.524724 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:21 crc kubenswrapper[4928]: E1122 00:08:21.525084 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:22.025072429 +0000 UTC m=+137.313103620 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.528920 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g8hz8"] Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.630587 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:21 crc kubenswrapper[4928]: E1122 00:08:21.630869 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:22.130858156 +0000 UTC m=+137.418889347 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.731910 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:21 crc kubenswrapper[4928]: E1122 00:08:21.731969 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:22.231943681 +0000 UTC m=+137.519974872 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.732417 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:21 crc kubenswrapper[4928]: E1122 00:08:21.732712 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:22.232700275 +0000 UTC m=+137.520731466 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.833033 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:21 crc kubenswrapper[4928]: E1122 00:08:21.833422 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:22.333392628 +0000 UTC m=+137.621423879 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.837855 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:21 crc kubenswrapper[4928]: E1122 00:08:21.838267 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:22.338253585 +0000 UTC m=+137.626284776 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.898100 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29396160-qh9g4" event={"ID":"c10b9e4e-381b-425e-a1ce-3615a7580960","Type":"ContainerStarted","Data":"7a51a217fb7fb154e2bd61e8799b2d7026a166cf6ed8c8a4210bc544654cd45a"} Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.898151 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29396160-qh9g4" event={"ID":"c10b9e4e-381b-425e-a1ce-3615a7580960","Type":"ContainerStarted","Data":"d251d19ad3e9dab5e2061af0286e2f22f26c6b501796d96dd9b32584e2395a3d"} Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.940761 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:21 crc kubenswrapper[4928]: E1122 00:08:21.941091 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:22.441063123 +0000 UTC m=+137.729094314 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.953243 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" event={"ID":"42bd19a9-be1a-4c00-bf19-536e55a1ae11","Type":"ContainerStarted","Data":"573cf95514cd9ad9636f9ebc08c29ac3cd70236c8b5ddd228e2cd5a388be472d"} Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.972303 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wqtln" event={"ID":"4a07cf7e-773c-46ea-989f-ec01540e81ee","Type":"ContainerStarted","Data":"a4077dd5eb3a25bd6d568b842b11eda3174a1e2ed1da2994fdd6640401623e68"} Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.975277 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wqtln" event={"ID":"4a07cf7e-773c-46ea-989f-ec01540e81ee","Type":"ContainerStarted","Data":"a53e1bb25e4204c46956479e431596effb0f735e69960aaae5d8d99c7106ba44"} Nov 22 00:08:21 crc kubenswrapper[4928]: E1122 00:08:21.974648 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:22.474636614 +0000 UTC m=+137.762667805 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.974416 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.981518 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-87wq5" event={"ID":"5b36deea-9635-49f8-862a-b9067233fd24","Type":"ContainerStarted","Data":"9ae7cee71b5c7caec818b68e051ae0c426146818767717f4470ee4633e571d6f"} Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.982290 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-87wq5" Nov 22 00:08:21 crc kubenswrapper[4928]: I1122 00:08:21.998368 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-kxr5c" event={"ID":"0d99453e-a6fd-4903-9889-0f2229a26fbf","Type":"ContainerStarted","Data":"c2f78138ff8a88f0c61ab38c7bc9a0671253f161ec7955e063885dc13bbbf7eb"} Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.003108 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-87wq5" Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.006430 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n62d6" event={"ID":"48e4652a-8c3c-4128-b0f9-8a5ebd25396e","Type":"ContainerStarted","Data":"fb3efa7151ca926005f03157e9a12715bedd0be490f89a923e3203ddcc4d19c4"} Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.011899 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zx7h8" event={"ID":"d29a2448-633a-472f-a072-849f5d90297e","Type":"ContainerStarted","Data":"6899489cd45f8b877bc9b4e58366c3c0b5d9535df116793ce6cb48066871f8c4"} Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.014927 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk" event={"ID":"ef1ace38-76ed-4913-a9ac-8954a34820ae","Type":"ContainerStarted","Data":"f30d5fc391bdd8c0ced3c5ee0673946f7eb656d46f98ceacd2a18d97fc51cb20"} Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.025989 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8hklq" event={"ID":"44a56f60-93be-4061-b8e9-36c22b3ef721","Type":"ContainerStarted","Data":"3dca2c2e7aecdf91b5eccc5c638d35a590bbf08e95952bffd58b6da0f8127088"} Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.026030 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8hklq" event={"ID":"44a56f60-93be-4061-b8e9-36c22b3ef721","Type":"ContainerStarted","Data":"46c76289ff8cbf0189f412fb1994d0580b6c02d8e17a6655f865ccc8e74930a6"} Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.032487 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-8hklq" Nov 22 00:08:22 crc 
kubenswrapper[4928]: I1122 00:08:22.046059 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-vq7f7" event={"ID":"2a03505d-6ba7-485d-9f59-6bf35b5c33d0","Type":"ContainerStarted","Data":"b19350ac5dc246a47f587bc28b4abc606f9e233118ca2fd1fbe7c907fc1dec5b"} Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.047133 4928 patch_prober.go:28] interesting pod/console-operator-58897d9998-8hklq container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.047211 4928 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8hklq" podUID="44a56f60-93be-4061-b8e9-36c22b3ef721" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.048379 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g8hz8" event={"ID":"24972a91-2eb5-4dd5-9146-718e7fd61403","Type":"ContainerStarted","Data":"588b3074f8c7bbffe3867b2e3bdd4c0801d362c47bfbbbf39be4192581894265"} Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.063229 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgph7" event={"ID":"ff645533-ce91-410f-9b30-134b6ab134f4","Type":"ContainerStarted","Data":"8833d83fe9c061ed025a5bba11283f93c0309ac36b032783030191c5ec7a8908"} Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.063398 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgph7" 
event={"ID":"ff645533-ce91-410f-9b30-134b6ab134f4","Type":"ContainerStarted","Data":"8ed276c6a24fb6f5fb47038125ac56c4f1833f91b68ae56b16ece8c5adfc23e4"} Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.065833 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-rz49f" event={"ID":"4765c779-803b-42f9-9598-c91eec1b5e9e","Type":"ContainerStarted","Data":"4239c66f238bc8f07d7cc09cb131de23f469b62519ec99eabe1db641fddf65fb"} Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.065939 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-rz49f" event={"ID":"4765c779-803b-42f9-9598-c91eec1b5e9e","Type":"ContainerStarted","Data":"0df569b8781f7af8d8c71ec7ef52a5f2f8c06de6d3fa75b25c5bcc581ba3cfec"} Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.076709 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:22 crc kubenswrapper[4928]: E1122 00:08:22.077053 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:22.577037839 +0000 UTC m=+137.865069030 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.078362 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.081852 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-25w2t" event={"ID":"2f0c943c-e71b-49f1-bce1-4d5bebec0bce","Type":"ContainerStarted","Data":"e60bcfc6f054f0e659e4b3d34d8688d2ca57b3fb88af51cfb7edc2d6f2f3dff1"} Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.082132 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-25w2t" event={"ID":"2f0c943c-e71b-49f1-bce1-4d5bebec0bce","Type":"ContainerStarted","Data":"70a90c0b28baaddd4fd38c6b5210e49f0e910a09da86f182ae5ff7b53bba2178"} Nov 22 00:08:22 crc kubenswrapper[4928]: E1122 00:08:22.083166 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:22.583154425 +0000 UTC m=+137.871185616 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.181139 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:22 crc kubenswrapper[4928]: E1122 00:08:22.182332 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:22.682311281 +0000 UTC m=+137.970342472 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.188698 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-c64mx"] Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.204394 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqrxk"] Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.217009 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m9ggs"] Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.223755 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bttq9"] Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.246725 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chsjj"] Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.262656 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmvbl"] Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.282951 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: 
\"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:22 crc kubenswrapper[4928]: E1122 00:08:22.283354 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:22.783340784 +0000 UTC m=+138.071371975 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.295116 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xb2wb"] Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.298978 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fkl9p"] Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.310291 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wv9p"] Nov 22 00:08:22 crc kubenswrapper[4928]: W1122 00:08:22.316611 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod294bb8b1_6c6b_45a6_aa39_33e662ff511d.slice/crio-348214177da26ea69b8c09e971b7150f2d72985b8d6a1fb51d94147ae7452baf WatchSource:0}: Error finding container 348214177da26ea69b8c09e971b7150f2d72985b8d6a1fb51d94147ae7452baf: Status 404 returned error can't find the container with id 
348214177da26ea69b8c09e971b7150f2d72985b8d6a1fb51d94147ae7452baf Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.317496 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5c8bn"] Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.323033 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396160-c8ch5"] Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.324421 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzv4d"] Nov 22 00:08:22 crc kubenswrapper[4928]: W1122 00:08:22.340596 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15e32f8b_4a69_4ca9_9cf1_965c9b734ed7.slice/crio-cecec12ea6b2af516cb375a7b64cab67ebac4118c56c9c8df1944c4ed137b681 WatchSource:0}: Error finding container cecec12ea6b2af516cb375a7b64cab67ebac4118c56c9c8df1944c4ed137b681: Status 404 returned error can't find the container with id cecec12ea6b2af516cb375a7b64cab67ebac4118c56c9c8df1944c4ed137b681 Nov 22 00:08:22 crc kubenswrapper[4928]: W1122 00:08:22.348846 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8480834f_a7e6_4459_a2d2_73b968249ce7.slice/crio-b5cc21d20c11c610634f75c7fa51e093cb135308aa4cf70d43428f054f48d807 WatchSource:0}: Error finding container b5cc21d20c11c610634f75c7fa51e093cb135308aa4cf70d43428f054f48d807: Status 404 returned error can't find the container with id b5cc21d20c11c610634f75c7fa51e093cb135308aa4cf70d43428f054f48d807 Nov 22 00:08:22 crc kubenswrapper[4928]: W1122 00:08:22.360505 4928 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5945d14_c3fa_42ae_91c3_64fe566c1b20.slice/crio-c8f815a37cf96fd90caa30be9a0435cf935519adfdbf99a67c0d08aca3dd1171 WatchSource:0}: Error finding container c8f815a37cf96fd90caa30be9a0435cf935519adfdbf99a67c0d08aca3dd1171: Status 404 returned error can't find the container with id c8f815a37cf96fd90caa30be9a0435cf935519adfdbf99a67c0d08aca3dd1171 Nov 22 00:08:22 crc kubenswrapper[4928]: W1122 00:08:22.362907 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8cbe72dc_6cc5_48bb_921d_17466274c88a.slice/crio-e2dff7539ed4e79c925f38a55082c8597f0814042cbaa7e2b436e8de46002bdc WatchSource:0}: Error finding container e2dff7539ed4e79c925f38a55082c8597f0814042cbaa7e2b436e8de46002bdc: Status 404 returned error can't find the container with id e2dff7539ed4e79c925f38a55082c8597f0814042cbaa7e2b436e8de46002bdc Nov 22 00:08:22 crc kubenswrapper[4928]: W1122 00:08:22.374651 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cf5ba26_bf7c_436f_9b1c_1d1a7f6bad37.slice/crio-09b10ea65a1fe77119c504201212bdfe5ff3e7cf87a9d83fbd347add20512ff6 WatchSource:0}: Error finding container 09b10ea65a1fe77119c504201212bdfe5ff3e7cf87a9d83fbd347add20512ff6: Status 404 returned error can't find the container with id 09b10ea65a1fe77119c504201212bdfe5ff3e7cf87a9d83fbd347add20512ff6 Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.384127 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:22 crc kubenswrapper[4928]: E1122 00:08:22.384442 4928 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:22.884427909 +0000 UTC m=+138.172459100 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.412545 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lhcmt"] Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.437312 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ftrnl"] Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.486995 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-dqtnp"] Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.487660 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:22 crc kubenswrapper[4928]: E1122 00:08:22.487994 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-22 00:08:22.987964949 +0000 UTC m=+138.275996130 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.588379 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:22 crc kubenswrapper[4928]: E1122 00:08:22.588688 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:23.088669372 +0000 UTC m=+138.376700563 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.618918 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-c4rrg"] Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.654022 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hnm59"] Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.689192 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7xwgw"] Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.689935 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:22 crc kubenswrapper[4928]: E1122 00:08:22.690178 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:23.190166881 +0000 UTC m=+138.478198072 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.697056 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjn2t"] Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.698704 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6fxj5"] Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.717213 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-c2zkd"] Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.750950 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-j72fl"] Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.763880 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-87wpn"] Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.764623 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mp624"] Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.766225 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-d7zcl" podStartSLOduration=117.766205184 podStartE2EDuration="1m57.766205184s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:22.764360327 +0000 UTC m=+138.052391508" watchObservedRunningTime="2025-11-22 00:08:22.766205184 +0000 UTC m=+138.054236375" Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.769033 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lpmm4"] Nov 22 00:08:22 crc kubenswrapper[4928]: W1122 00:08:22.778484 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcef7a6ad_8a53_415c_b18e_098c96f5c630.slice/crio-eb561ed43ff2d2c65a3519fcd3925ee88aa3b7fdcf781b8e4319d530d5c04667 WatchSource:0}: Error finding container eb561ed43ff2d2c65a3519fcd3925ee88aa3b7fdcf781b8e4319d530d5c04667: Status 404 returned error can't find the container with id eb561ed43ff2d2c65a3519fcd3925ee88aa3b7fdcf781b8e4319d530d5c04667 Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.784644 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-54l8n"] Nov 22 00:08:22 crc kubenswrapper[4928]: W1122 00:08:22.785880 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f49e92c_7ef7_4cd8_9dda_fbc8e8694b79.slice/crio-a590b6d3864aecab5cc8e5878d1fd46384698f46bba7892437eac97a61396cac WatchSource:0}: Error finding container a590b6d3864aecab5cc8e5878d1fd46384698f46bba7892437eac97a61396cac: Status 404 returned error can't find the container with id a590b6d3864aecab5cc8e5878d1fd46384698f46bba7892437eac97a61396cac Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.793059 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:22 crc kubenswrapper[4928]: E1122 00:08:22.794114 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:23.294090012 +0000 UTC m=+138.582121203 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.835717 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29396160-qh9g4" podStartSLOduration=117.835699417 podStartE2EDuration="1m57.835699417s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:22.834984695 +0000 UTC m=+138.123015886" watchObservedRunningTime="2025-11-22 00:08:22.835699417 +0000 UTC m=+138.123730608" Nov 22 00:08:22 crc kubenswrapper[4928]: W1122 00:08:22.836885 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72b08661_098c_44a0_82f1_7b1277bc65ba.slice/crio-4648a2c0545532da7edba558cd11a454961603b8141022ec11e107ef49a7cad1 WatchSource:0}: Error finding container 4648a2c0545532da7edba558cd11a454961603b8141022ec11e107ef49a7cad1: Status 404 returned error can't find the container with id 
4648a2c0545532da7edba558cd11a454961603b8141022ec11e107ef49a7cad1 Nov 22 00:08:22 crc kubenswrapper[4928]: W1122 00:08:22.840702 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8235f10d_349e_42ab_93e3_ed056e36bca4.slice/crio-9d7f0486b0e9b18264497ef45183ce2cefb8996b7e3dbdaee5d95dcb2a3e1fc0 WatchSource:0}: Error finding container 9d7f0486b0e9b18264497ef45183ce2cefb8996b7e3dbdaee5d95dcb2a3e1fc0: Status 404 returned error can't find the container with id 9d7f0486b0e9b18264497ef45183ce2cefb8996b7e3dbdaee5d95dcb2a3e1fc0 Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.883817 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-zx7h8" podStartSLOduration=117.883800661 podStartE2EDuration="1m57.883800661s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:22.882066708 +0000 UTC m=+138.170097899" watchObservedRunningTime="2025-11-22 00:08:22.883800661 +0000 UTC m=+138.171831852" Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.894916 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:22 crc kubenswrapper[4928]: E1122 00:08:22.895281 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-22 00:08:23.39526816 +0000 UTC m=+138.683299351 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.976465 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-rz49f" podStartSLOduration=117.976448409 podStartE2EDuration="1m57.976448409s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:22.921644091 +0000 UTC m=+138.209675272" watchObservedRunningTime="2025-11-22 00:08:22.976448409 +0000 UTC m=+138.264479600" Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.976626 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" podStartSLOduration=117.976622474 podStartE2EDuration="1m57.976622474s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:22.973616023 +0000 UTC m=+138.261647214" watchObservedRunningTime="2025-11-22 00:08:22.976622474 +0000 UTC m=+138.264653665" Nov 22 00:08:22 crc kubenswrapper[4928]: I1122 00:08:22.998519 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:22 crc kubenswrapper[4928]: E1122 00:08:22.998890 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:23.498875561 +0000 UTC m=+138.786906752 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.038274 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-rz49f" Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.039767 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fgph7" podStartSLOduration=118.039747584 podStartE2EDuration="1m58.039747584s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:23.000003475 +0000 UTC m=+138.288034676" watchObservedRunningTime="2025-11-22 00:08:23.039747584 +0000 UTC m=+138.327778775" Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.040883 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-87wq5" podStartSLOduration=118.040875029 podStartE2EDuration="1m58.040875029s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:23.038211108 +0000 UTC m=+138.326242299" watchObservedRunningTime="2025-11-22 00:08:23.040875029 +0000 UTC m=+138.328906220" Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.052692 4928 patch_prober.go:28] interesting pod/router-default-5444994796-rz49f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 00:08:23 crc kubenswrapper[4928]: [-]has-synced failed: reason withheld Nov 22 00:08:23 crc kubenswrapper[4928]: [+]process-running ok Nov 22 00:08:23 crc kubenswrapper[4928]: healthz check failed Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.053510 4928 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rz49f" podUID="4765c779-803b-42f9-9598-c91eec1b5e9e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.080241 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-8hklq" podStartSLOduration=118.080220226 podStartE2EDuration="1m58.080220226s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:23.07543292 +0000 UTC m=+138.363464131" watchObservedRunningTime="2025-11-22 00:08:23.080220226 +0000 UTC m=+138.368251417" Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.089078 4928 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqrxk" event={"ID":"62ac57bc-5d54-4f2c-83bc-8068eead3c89","Type":"ContainerStarted","Data":"2c759ac34b158375e36e54458d423cb9e47e913ed86e6ef790d51f0926dddf79"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.089122 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqrxk" event={"ID":"62ac57bc-5d54-4f2c-83bc-8068eead3c89","Type":"ContainerStarted","Data":"192ba7090223ba3729e6a733ffc587a04a79ff746947b8c7ed7d851cea59fe84"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.091337 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lhcmt" event={"ID":"d46a6b01-c494-4912-8b32-a496ef03461a","Type":"ContainerStarted","Data":"3ea30badbc43c11c2ac36c534317bff7fbcc552d5aa26e87d0c563ad2370f04d"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.091361 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lhcmt" event={"ID":"d46a6b01-c494-4912-8b32-a496ef03461a","Type":"ContainerStarted","Data":"88e6884e68539bdeb53dc03d31a17bca91295aafac60b2c9d7ee825bf95e358a"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.093448 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fkl9p" event={"ID":"a5945d14-c3fa-42ae-91c3-64fe566c1b20","Type":"ContainerStarted","Data":"12a648d8edbfbb72166e8e9d26777ed17839a0e4f3536f0d9b10e96ca749a66b"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.093472 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fkl9p" event={"ID":"a5945d14-c3fa-42ae-91c3-64fe566c1b20","Type":"ContainerStarted","Data":"c8f815a37cf96fd90caa30be9a0435cf935519adfdbf99a67c0d08aca3dd1171"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.099199 4928 
generic.go:334] "Generic (PLEG): container finished" podID="ef1ace38-76ed-4913-a9ac-8954a34820ae" containerID="cb612c708afc8aeed3514a56f96185f0a71c3215bb995f5a627d1b2607167e79" exitCode=0 Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.099253 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk" event={"ID":"ef1ace38-76ed-4913-a9ac-8954a34820ae","Type":"ContainerDied","Data":"cb612c708afc8aeed3514a56f96185f0a71c3215bb995f5a627d1b2607167e79"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.100030 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:23 crc kubenswrapper[4928]: E1122 00:08:23.100277 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:23.600266545 +0000 UTC m=+138.888297736 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.101117 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j72fl" event={"ID":"05f6d2a7-33e5-42fe-8d92-bdb258597a56","Type":"ContainerStarted","Data":"5ee9ff732bf67f757010e99cc986e47ab5da23f47db3d7e70a7ad633b6933254"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.115441 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7xwgw" event={"ID":"cef7a6ad-8a53-415c-b18e-098c96f5c630","Type":"ContainerStarted","Data":"eb561ed43ff2d2c65a3519fcd3925ee88aa3b7fdcf781b8e4319d530d5c04667"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.121283 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chsjj" event={"ID":"7f6c2195-9090-43be-bc5a-802f7e7256d8","Type":"ContainerStarted","Data":"c42fcb2c9a68dca3bf7fa00de28d9b2dfbba82e785e288d120fe6a1cc2b6138c"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.121318 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chsjj" event={"ID":"7f6c2195-9090-43be-bc5a-802f7e7256d8","Type":"ContainerStarted","Data":"3167aabee4fb1a549176cd9abe3821cddf01981358656d459a620249b1a95108"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.121329 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chsjj" event={"ID":"7f6c2195-9090-43be-bc5a-802f7e7256d8","Type":"ContainerStarted","Data":"e8feed447baec5458911b52825cdfa0f95a52de93e5789a7cbb26b2a06596ec3"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.122123 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chsjj" Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.127898 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmvbl" event={"ID":"0a5b438c-2c33-49fa-a7c9-98aef8f764f3","Type":"ContainerStarted","Data":"a56b2aaff923ec48727f329abb5cf7718d1c21b0d9a1196d52f52e201a27f17b"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.127933 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmvbl" event={"ID":"0a5b438c-2c33-49fa-a7c9-98aef8f764f3","Type":"ContainerStarted","Data":"291a95c91b4ec1bf9af73421cd1d9829fbb9255ce05be896d478143eb37db54c"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.129279 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmvbl" Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.136393 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5c8bn" event={"ID":"4cf5ba26-bf7c-436f-9b1c-1d1a7f6bad37","Type":"ContainerStarted","Data":"f0bfa108d33c49fbe3d4483f9c14b8b2fcec3e7aceaa307370293252768c2a76"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.136431 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5c8bn" 
event={"ID":"4cf5ba26-bf7c-436f-9b1c-1d1a7f6bad37","Type":"ContainerStarted","Data":"09b10ea65a1fe77119c504201212bdfe5ff3e7cf87a9d83fbd347add20512ff6"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.138846 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wv9p" event={"ID":"8cbe72dc-6cc5-48bb-921d-17466274c88a","Type":"ContainerStarted","Data":"e2dff7539ed4e79c925f38a55082c8597f0814042cbaa7e2b436e8de46002bdc"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.140038 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-lhcmt" podStartSLOduration=118.140012514 podStartE2EDuration="1m58.140012514s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:23.127017379 +0000 UTC m=+138.415048570" watchObservedRunningTime="2025-11-22 00:08:23.140012514 +0000 UTC m=+138.428043705" Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.141422 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mp624" event={"ID":"7707cfc4-56b4-4b94-8383-954071ae4bd4","Type":"ContainerStarted","Data":"1361389c2548559fe5a0d7d5ad4ba132f6e3b991486f45166019fc851c67eaf6"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.143759 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lpmm4" event={"ID":"8235f10d-349e-42ab-93e3-ed056e36bca4","Type":"ContainerStarted","Data":"9d7f0486b0e9b18264497ef45183ce2cefb8996b7e3dbdaee5d95dcb2a3e1fc0"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.146681 4928 generic.go:334] "Generic (PLEG): container finished" podID="48e4652a-8c3c-4128-b0f9-8a5ebd25396e" 
containerID="3fe8e3143208c3782e6f00ce67d5c9be3a73b97d3ed02acf1fdc121263b1f6c6" exitCode=0 Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.146751 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n62d6" event={"ID":"48e4652a-8c3c-4128-b0f9-8a5ebd25396e","Type":"ContainerDied","Data":"3fe8e3143208c3782e6f00ce67d5c9be3a73b97d3ed02acf1fdc121263b1f6c6"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.149174 4928 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qmvbl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.149221 4928 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmvbl" podUID="0a5b438c-2c33-49fa-a7c9-98aef8f764f3" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.149557 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dqtnp" event={"ID":"ec9c6599-a073-4b08-9e39-20250709a458","Type":"ContainerStarted","Data":"f19ea3e7a58302129961df06b0cb762c2903f2020eef8bd7fa1c45382417f787"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.152945 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hnm59" event={"ID":"4f50294b-9edd-469f-8873-aeb445d7653b","Type":"ContainerStarted","Data":"02bb5cc8440809c7db6718231606610841a956a939aebbd4ec82278caf54d09f"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.160247 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzv4d" event={"ID":"47c44a11-8d04-4a95-9ce0-6436dbf6a035","Type":"ContainerStarted","Data":"4a0204e1dedf028f0fae5726c8cca937879388985abc5336fed31ac83c91350d"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.160298 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzv4d" event={"ID":"47c44a11-8d04-4a95-9ce0-6436dbf6a035","Type":"ContainerStarted","Data":"9fd25f48690a0924c340773239d490df08fe04eb5bb7bdf05348ba2ad565f27b"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.160606 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzv4d" Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.161872 4928 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-nzv4d container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.161902 4928 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzv4d" podUID="47c44a11-8d04-4a95-9ce0-6436dbf6a035" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.168487 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjn2t" event={"ID":"6f49e92c-7ef7-4cd8-9dda-fbc8e8694b79","Type":"ContainerStarted","Data":"a590b6d3864aecab5cc8e5878d1fd46384698f46bba7892437eac97a61396cac"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.172365 4928 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wqtln" event={"ID":"4a07cf7e-773c-46ea-989f-ec01540e81ee","Type":"ContainerStarted","Data":"0dcb47b48259acee73a0bfb17a865ab14e5f7e1ac89241be3ccdf389212cdcbb"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.177291 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-25w2t" event={"ID":"2f0c943c-e71b-49f1-bce1-4d5bebec0bce","Type":"ContainerStarted","Data":"07ea83bbec908cf0a7081a573ecaa177ecf519f7a6c03ee11161b712cdd7fe2c"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.181379 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g8hz8" event={"ID":"24972a91-2eb5-4dd5-9146-718e7fd61403","Type":"ContainerStarted","Data":"df5a783850b13f091cbb344cec4580929405404cc6745d721c75ea7ad9184ae4"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.198776 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396160-c8ch5" event={"ID":"8480834f-a7e6-4459-a2d2-73b968249ce7","Type":"ContainerStarted","Data":"22157fa0f5773d77a9c9c25987190f0678af7130d3867b47adad3e2f17be2b91"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.198847 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396160-c8ch5" event={"ID":"8480834f-a7e6-4459-a2d2-73b968249ce7","Type":"ContainerStarted","Data":"b5cc21d20c11c610634f75c7fa51e093cb135308aa4cf70d43428f054f48d807"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.200411 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:23 crc kubenswrapper[4928]: E1122 00:08:23.206159 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:23.706138186 +0000 UTC m=+138.994169377 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.208338 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bttq9" event={"ID":"a49e097b-7213-4398-800a-02a39ebc297e","Type":"ContainerStarted","Data":"2f29acd7e2fe5eb4ea67c2c42f46fde1f20bf46e13a0036273a82c13e42be9c5"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.208365 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bttq9" event={"ID":"a49e097b-7213-4398-800a-02a39ebc297e","Type":"ContainerStarted","Data":"0863ea4cc04dced87f0b650effb13de595d8049a75aafa61b89f83f417ba1c1b"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.208379 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-bttq9" Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.208570 4928 patch_prober.go:28] interesting pod/downloads-7954f5f757-bttq9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 
10.217.0.14:8080: connect: connection refused" start-of-body= Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.208601 4928 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bttq9" podUID="a49e097b-7213-4398-800a-02a39ebc297e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.212213 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chsjj" podStartSLOduration=118.21218956 podStartE2EDuration="1m58.21218956s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:23.164986544 +0000 UTC m=+138.453017735" watchObservedRunningTime="2025-11-22 00:08:23.21218956 +0000 UTC m=+138.500220741" Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.214442 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-vq7f7" event={"ID":"2a03505d-6ba7-485d-9f59-6bf35b5c33d0","Type":"ContainerStarted","Data":"c59f343cc71299cdbd1f4ff08fed71a821d79024b7aadf3dbb8d7a3d39e2ddf3"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.219021 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m9ggs" event={"ID":"294bb8b1-6c6b-45a6-aa39-33e662ff511d","Type":"ContainerStarted","Data":"79ca086f21004787349b7e3664a93b8c35246853f662358dbdf35ab0d2a8e65e"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.219084 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m9ggs" 
event={"ID":"294bb8b1-6c6b-45a6-aa39-33e662ff511d","Type":"ContainerStarted","Data":"348214177da26ea69b8c09e971b7150f2d72985b8d6a1fb51d94147ae7452baf"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.222582 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-87wpn" event={"ID":"72b08661-098c-44a0-82f1-7b1277bc65ba","Type":"ContainerStarted","Data":"4648a2c0545532da7edba558cd11a454961603b8141022ec11e107ef49a7cad1"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.234274 4928 generic.go:334] "Generic (PLEG): container finished" podID="0d99453e-a6fd-4903-9889-0f2229a26fbf" containerID="afab6583a7e510c1b3ca2068fa4b15f56bc017cbec83a97b1b843750bbad36f1" exitCode=0 Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.234530 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-kxr5c" event={"ID":"0d99453e-a6fd-4903-9889-0f2229a26fbf","Type":"ContainerDied","Data":"afab6583a7e510c1b3ca2068fa4b15f56bc017cbec83a97b1b843750bbad36f1"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.241428 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xb2wb" event={"ID":"15e32f8b-4a69-4ca9-9cf1-965c9b734ed7","Type":"ContainerStarted","Data":"99467e8f158287cdf2ba4483e44d02a2ff91220e9883cc88e215fcd6fb7291af"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.241471 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xb2wb" event={"ID":"15e32f8b-4a69-4ca9-9cf1-965c9b734ed7","Type":"ContainerStarted","Data":"cecec12ea6b2af516cb375a7b64cab67ebac4118c56c9c8df1944c4ed137b681"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.256190 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-c64mx" 
event={"ID":"13f49211-e6af-49ed-823f-d1b54d73ac0b","Type":"ContainerStarted","Data":"edb15916931696fa933ed9979f8d40cb3257426f60ec14aab6bc2107d9fe97c1"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.256233 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-c64mx" event={"ID":"13f49211-e6af-49ed-823f-d1b54d73ac0b","Type":"ContainerStarted","Data":"9da1053d2dfb599c26091d99b908a9a9b408b8829ab5de613b0686fee81d55a1"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.265007 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rgn8z" event={"ID":"9b218ffc-ca02-4e5b-bc3b-40509f3ee9e3","Type":"ContainerStarted","Data":"00e0eadfb0047cbb0c72cc888a51fa314d256ba66f6e9354a6eb4cb41d2a355b"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.265042 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rgn8z" event={"ID":"9b218ffc-ca02-4e5b-bc3b-40509f3ee9e3","Type":"ContainerStarted","Data":"ff3b14389a8b8b7850d667454eca874472f07ad3b7a478dd0ae634e716a6a399"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.265051 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rgn8z" event={"ID":"9b218ffc-ca02-4e5b-bc3b-40509f3ee9e3","Type":"ContainerStarted","Data":"894b1d9580b0f82999dd8b829b3d9921f0e825d64e2822012e58496a5aa73886"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.266364 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2zkd" event={"ID":"35bda3e7-0e70-4bfc-bd43-29fef60761d2","Type":"ContainerStarted","Data":"0e9ad99f322a9b894a3d61984b160c57283a2df39fcb25584ac559259e4bf792"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.267110 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ftrnl" event={"ID":"2bf436d0-26db-4a3b-b9c2-8ca66b3a37a7","Type":"ContainerStarted","Data":"cbe1459b359c98a2e7fc1dcd1d880a938508dadd1309e5cad649460a68cee0ec"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.267134 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ftrnl" event={"ID":"2bf436d0-26db-4a3b-b9c2-8ca66b3a37a7","Type":"ContainerStarted","Data":"b7a86afc03567cf358f800ff5589d5b7e3a03b3c4bd18e71472f4c65a6ba3a4d"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.268916 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c4rrg" event={"ID":"6416d870-85ef-468a-a9a2-b2429b06a6d3","Type":"ContainerStarted","Data":"a6e7ba9c2aad31d9ed70bff3db6924608e01fe543704ceaf73292cd290e11461"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.270549 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6fxj5" event={"ID":"f4d80d6f-91bd-4ea6-ac44-993a78316ba2","Type":"ContainerStarted","Data":"e34177c501c13fe952abba0678f5e47de836132c398d341b4f9c57fa67ae4173"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.273154 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-54l8n" event={"ID":"db624f52-ab96-42e2-a54c-62c2abc36274","Type":"ContainerStarted","Data":"6889fc0ffb27228b5070008f50fe1b555791a72c4495b3f63d63e476e24d9574"} Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.274990 4928 patch_prober.go:28] interesting pod/console-operator-58897d9998-8hklq container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 
00:08:23.275063 4928 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8hklq" podUID="44a56f60-93be-4061-b8e9-36c22b3ef721" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.275458 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.281202 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.284003 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-fkl9p" podStartSLOduration=118.283989834 podStartE2EDuration="1m58.283989834s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:23.24011069 +0000 UTC m=+138.528141881" watchObservedRunningTime="2025-11-22 00:08:23.283989834 +0000 UTC m=+138.572021025" Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.284333 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m9ggs" podStartSLOduration=118.284327395 podStartE2EDuration="1m58.284327395s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:23.282991584 +0000 UTC m=+138.571022775" watchObservedRunningTime="2025-11-22 00:08:23.284327395 +0000 UTC m=+138.572358586" Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 
00:08:23.303200 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:23 crc kubenswrapper[4928]: E1122 00:08:23.305097 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:23.805086576 +0000 UTC m=+139.093117767 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.319157 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-wqtln" podStartSLOduration=118.319141263 podStartE2EDuration="1m58.319141263s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:23.317330158 +0000 UTC m=+138.605361349" watchObservedRunningTime="2025-11-22 00:08:23.319141263 +0000 UTC m=+138.607172454" Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.408349 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:23 crc kubenswrapper[4928]: E1122 00:08:23.409229 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:23.909197893 +0000 UTC m=+139.197229084 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.467041 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-vq7f7" podStartSLOduration=5.467012992 podStartE2EDuration="5.467012992s" podCreationTimestamp="2025-11-22 00:08:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:23.407485731 +0000 UTC m=+138.695516922" watchObservedRunningTime="2025-11-22 00:08:23.467012992 +0000 UTC m=+138.755044183" Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.472966 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g8hz8" podStartSLOduration=118.472935632 podStartE2EDuration="1m58.472935632s" 
podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:23.466840887 +0000 UTC m=+138.754872098" watchObservedRunningTime="2025-11-22 00:08:23.472935632 +0000 UTC m=+138.760966823" Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.532035 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-bttq9" podStartSLOduration=118.532005829 podStartE2EDuration="1m58.532005829s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:23.503477871 +0000 UTC m=+138.791509052" watchObservedRunningTime="2025-11-22 00:08:23.532005829 +0000 UTC m=+138.820037020" Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.558677 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:23 crc kubenswrapper[4928]: E1122 00:08:23.559107 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:24.059080033 +0000 UTC m=+139.347111224 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.560453 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29396160-c8ch5" podStartSLOduration=118.560419823 podStartE2EDuration="1m58.560419823s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:23.5570311 +0000 UTC m=+138.845062291" watchObservedRunningTime="2025-11-22 00:08:23.560419823 +0000 UTC m=+138.848451014" Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.560760 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzv4d" podStartSLOduration=118.560754223 podStartE2EDuration="1m58.560754223s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:23.53270567 +0000 UTC m=+138.820736871" watchObservedRunningTime="2025-11-22 00:08:23.560754223 +0000 UTC m=+138.848785414" Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.604781 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5c8bn" podStartSLOduration=118.604760872 podStartE2EDuration="1m58.604760872s" podCreationTimestamp="2025-11-22 00:06:25 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:23.600966907 +0000 UTC m=+138.888998098" watchObservedRunningTime="2025-11-22 00:08:23.604760872 +0000 UTC m=+138.892792063" Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.662094 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:23 crc kubenswrapper[4928]: E1122 00:08:23.662492 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:24.162458638 +0000 UTC m=+139.450489829 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.663194 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:23 crc kubenswrapper[4928]: E1122 00:08:23.663843 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:24.163825199 +0000 UTC m=+139.451856390 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.681671 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-25w2t" podStartSLOduration=118.68164445 podStartE2EDuration="1m58.68164445s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:23.678558237 +0000 UTC m=+138.966589428" watchObservedRunningTime="2025-11-22 00:08:23.68164445 +0000 UTC m=+138.969675641" Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.771083 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:23 crc kubenswrapper[4928]: E1122 00:08:23.771559 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:24.271537636 +0000 UTC m=+139.559568827 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.795074 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmvbl" podStartSLOduration=118.795047421 podStartE2EDuration="1m58.795047421s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:23.782412066 +0000 UTC m=+139.070443257" watchObservedRunningTime="2025-11-22 00:08:23.795047421 +0000 UTC m=+139.083078612" Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.876442 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:23 crc kubenswrapper[4928]: E1122 00:08:23.876857 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:24.376841588 +0000 UTC m=+139.664872779 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.877298 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rgn8z" podStartSLOduration=118.877266721 podStartE2EDuration="1m58.877266721s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:23.843324639 +0000 UTC m=+139.131355840" watchObservedRunningTime="2025-11-22 00:08:23.877266721 +0000 UTC m=+139.165297902" Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.887834 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xb2wb" podStartSLOduration=118.887812152 podStartE2EDuration="1m58.887812152s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:23.887220064 +0000 UTC m=+139.175251245" watchObservedRunningTime="2025-11-22 00:08:23.887812152 +0000 UTC m=+139.175843343" Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.977368 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:23 crc kubenswrapper[4928]: E1122 00:08:23.977599 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:24.477572893 +0000 UTC m=+139.765604084 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:23 crc kubenswrapper[4928]: I1122 00:08:23.977827 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:23 crc kubenswrapper[4928]: E1122 00:08:23.978124 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:24.478111449 +0000 UTC m=+139.766142640 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.047660 4928 patch_prober.go:28] interesting pod/router-default-5444994796-rz49f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 00:08:24 crc kubenswrapper[4928]: [-]has-synced failed: reason withheld Nov 22 00:08:24 crc kubenswrapper[4928]: [+]process-running ok Nov 22 00:08:24 crc kubenswrapper[4928]: healthz check failed Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.047731 4928 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rz49f" podUID="4765c779-803b-42f9-9598-c91eec1b5e9e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.079137 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:24 crc kubenswrapper[4928]: E1122 00:08:24.079368 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-22 00:08:24.579322328 +0000 UTC m=+139.867353529 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.079527 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:24 crc kubenswrapper[4928]: E1122 00:08:24.079919 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:24.579902906 +0000 UTC m=+139.867934097 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.180396 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:24 crc kubenswrapper[4928]: E1122 00:08:24.180750 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:24.680736853 +0000 UTC m=+139.968768034 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.278458 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dqtnp" event={"ID":"ec9c6599-a073-4b08-9e39-20250709a458","Type":"ContainerStarted","Data":"edf0e70f83e1b62773262cc0c0950d35e0d06ac859e3dcaa68d41aee21acd54d"} Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.279753 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hnm59" event={"ID":"4f50294b-9edd-469f-8873-aeb445d7653b","Type":"ContainerStarted","Data":"f569794b9d52eb63d8a588a84a5de3244e70f72f322d1a290059c03efa7a9b7f"} Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.280071 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-hnm59" Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.281163 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wv9p" event={"ID":"8cbe72dc-6cc5-48bb-921d-17466274c88a","Type":"ContainerStarted","Data":"3a7bff53ded61f6a65fd30f58b88a86ddae0fedfdc807a4ac1dec643f9654ecd"} Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.281462 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: 
\"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:24 crc kubenswrapper[4928]: E1122 00:08:24.281783 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:24.781767187 +0000 UTC m=+140.069798378 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.282841 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7xwgw" event={"ID":"cef7a6ad-8a53-415c-b18e-098c96f5c630","Type":"ContainerStarted","Data":"15fe8734e522ef4584f0a2ea3ab0ededa911cd2b8b2ffe2e92c6327d5c3af932"} Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.282979 4928 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-hnm59 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.283007 4928 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-hnm59" podUID="4f50294b-9edd-469f-8873-aeb445d7653b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 
10.217.0.38:8443: connect: connection refused" Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.284277 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2zkd" event={"ID":"35bda3e7-0e70-4bfc-bd43-29fef60761d2","Type":"ContainerStarted","Data":"bba73d25044e635fe5cf790d5b25750563c57f9321cef1f6e9ab81bdd02c832b"} Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.285821 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mp624" event={"ID":"7707cfc4-56b4-4b94-8383-954071ae4bd4","Type":"ContainerStarted","Data":"17c41ed6d6c0d2af1fe3a784a4b358a99c65017a4a1357a23fd76599b03bdabc"} Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.287053 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c4rrg" event={"ID":"6416d870-85ef-468a-a9a2-b2429b06a6d3","Type":"ContainerStarted","Data":"227db124582d0c3e7422b894edae122db3f75409723c8f24a25c383b51594644"} Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.288898 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-54l8n" event={"ID":"db624f52-ab96-42e2-a54c-62c2abc36274","Type":"ContainerStarted","Data":"d1bd781d185e15b4394f2613296f454a0919def47554a012ee572aa17c9558df"} Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.289614 4928 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-nzv4d container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.289661 4928 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzv4d" 
podUID="47c44a11-8d04-4a95-9ce0-6436dbf6a035" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.289840 4928 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qmvbl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.289874 4928 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmvbl" podUID="0a5b438c-2c33-49fa-a7c9-98aef8f764f3" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.289938 4928 patch_prober.go:28] interesting pod/downloads-7954f5f757-bttq9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.289994 4928 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bttq9" podUID="a49e097b-7213-4398-800a-02a39ebc297e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.298486 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-hnm59" podStartSLOduration=119.298470875 podStartE2EDuration="1m59.298470875s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:24.297423482 +0000 UTC m=+139.585454673" watchObservedRunningTime="2025-11-22 00:08:24.298470875 +0000 UTC m=+139.586502066" Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.323388 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-54l8n" podStartSLOduration=119.323369861 podStartE2EDuration="1m59.323369861s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:24.321373331 +0000 UTC m=+139.609404522" watchObservedRunningTime="2025-11-22 00:08:24.323369861 +0000 UTC m=+139.611401052" Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.361671 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mp624" podStartSLOduration=119.361634216 podStartE2EDuration="1m59.361634216s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:24.341741871 +0000 UTC m=+139.629773062" watchObservedRunningTime="2025-11-22 00:08:24.361634216 +0000 UTC m=+139.649665437" Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.372670 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7wv9p" podStartSLOduration=119.372656461 podStartE2EDuration="1m59.372656461s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:24.371376523 +0000 UTC m=+139.659407714" 
watchObservedRunningTime="2025-11-22 00:08:24.372656461 +0000 UTC m=+139.660687652" Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.382772 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:24 crc kubenswrapper[4928]: E1122 00:08:24.385014 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:24.884992806 +0000 UTC m=+140.173023997 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.396980 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gqrxk" podStartSLOduration=119.39694244 podStartE2EDuration="1m59.39694244s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:24.393958659 +0000 UTC m=+139.681989850" watchObservedRunningTime="2025-11-22 00:08:24.39694244 +0000 UTC m=+139.684973631" Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 
00:08:24.421538 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7xwgw" podStartSLOduration=119.421492617 podStartE2EDuration="1m59.421492617s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:24.419715323 +0000 UTC m=+139.707746514" watchObservedRunningTime="2025-11-22 00:08:24.421492617 +0000 UTC m=+139.709523808" Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.485944 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:24 crc kubenswrapper[4928]: E1122 00:08:24.486334 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:24.986318069 +0000 UTC m=+140.274349260 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.586938 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:24 crc kubenswrapper[4928]: E1122 00:08:24.587072 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:25.087054973 +0000 UTC m=+140.375086164 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.587282 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:24 crc kubenswrapper[4928]: E1122 00:08:24.587553 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:25.087544848 +0000 UTC m=+140.375576039 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.688880 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:24 crc kubenswrapper[4928]: E1122 00:08:24.689110 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:25.189074076 +0000 UTC m=+140.477105277 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.689269 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:24 crc kubenswrapper[4928]: E1122 00:08:24.689595 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:25.189583612 +0000 UTC m=+140.477614803 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.790956 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:24 crc kubenswrapper[4928]: E1122 00:08:24.791213 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:25.291183672 +0000 UTC m=+140.579214903 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.791424 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:24 crc kubenswrapper[4928]: E1122 00:08:24.792168 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:25.292141102 +0000 UTC m=+140.580172333 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.892063 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:24 crc kubenswrapper[4928]: E1122 00:08:24.892200 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:25.392170455 +0000 UTC m=+140.680201656 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.892294 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:24 crc kubenswrapper[4928]: E1122 00:08:24.892550 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:25.392541796 +0000 UTC m=+140.680572977 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.993458 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:24 crc kubenswrapper[4928]: E1122 00:08:24.993614 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:25.49358934 +0000 UTC m=+140.781620531 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:24 crc kubenswrapper[4928]: I1122 00:08:24.994011 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:24 crc kubenswrapper[4928]: E1122 00:08:24.994363 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:25.494346933 +0000 UTC m=+140.782378114 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.045013 4928 patch_prober.go:28] interesting pod/router-default-5444994796-rz49f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 00:08:25 crc kubenswrapper[4928]: [-]has-synced failed: reason withheld Nov 22 00:08:25 crc kubenswrapper[4928]: [+]process-running ok Nov 22 00:08:25 crc kubenswrapper[4928]: healthz check failed Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.045069 4928 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rz49f" podUID="4765c779-803b-42f9-9598-c91eec1b5e9e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.095029 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:25 crc kubenswrapper[4928]: E1122 00:08:25.095297 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-22 00:08:25.595267343 +0000 UTC m=+140.883298544 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.095431 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:25 crc kubenswrapper[4928]: E1122 00:08:25.095812 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:25.595784328 +0000 UTC m=+140.883815529 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.196608 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:25 crc kubenswrapper[4928]: E1122 00:08:25.196893 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:25.696865883 +0000 UTC m=+140.984897104 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.197337 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:25 crc kubenswrapper[4928]: E1122 00:08:25.197837 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:25.697821543 +0000 UTC m=+140.985852744 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.298906 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:25 crc kubenswrapper[4928]: E1122 00:08:25.299009 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:25.79898043 +0000 UTC m=+141.087011621 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.299446 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:25 crc kubenswrapper[4928]: E1122 00:08:25.299743 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:25.799734763 +0000 UTC m=+141.087765944 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.302130 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2zkd" event={"ID":"35bda3e7-0e70-4bfc-bd43-29fef60761d2","Type":"ContainerStarted","Data":"021a258f48d1600b12a726c980a59eff5a8f1c7e818f77176b87b48ed6309e8c"} Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.305209 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lpmm4" event={"ID":"8235f10d-349e-42ab-93e3-ed056e36bca4","Type":"ContainerStarted","Data":"aa07975b0915c09f49e8e20d2061d1195fb43e897d66db9bd88265550abcb4f6"} Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.306048 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lpmm4" Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.308697 4928 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-lpmm4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused" start-of-body= Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.308768 4928 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lpmm4" podUID="8235f10d-349e-42ab-93e3-ed056e36bca4" containerName="packageserver" probeResult="failure" 
output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused" Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.309192 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dqtnp" event={"ID":"ec9c6599-a073-4b08-9e39-20250709a458","Type":"ContainerStarted","Data":"e8838fc8999925cc23f6e42ea740c2c3bafb2734e3e0efa436e1720b671db95e"} Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.309355 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-dqtnp" Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.316126 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-kxr5c" event={"ID":"0d99453e-a6fd-4903-9889-0f2229a26fbf","Type":"ContainerStarted","Data":"87c662b68b537ec74a377ef40758a87b692b29ed1b5ffef7fb8660300167d89c"} Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.318745 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n62d6" event={"ID":"48e4652a-8c3c-4128-b0f9-8a5ebd25396e","Type":"ContainerStarted","Data":"758ed678ae2b590d7772f360bf780066d518d385745548910a3d7d5a2b5d8dfb"} Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.318882 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n62d6" Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.324329 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2zkd" podStartSLOduration=120.3243105 podStartE2EDuration="2m0.3243105s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:25.322909787 +0000 UTC m=+140.610941008" 
watchObservedRunningTime="2025-11-22 00:08:25.3243105 +0000 UTC m=+140.612341691" Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.325927 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjn2t" event={"ID":"6f49e92c-7ef7-4cd8-9dda-fbc8e8694b79","Type":"ContainerStarted","Data":"f065daa7dd646f0bb48bdd9638b614cd90d22a173b039aa4a0a8acddf1759d6e"} Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.329569 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-87wpn" event={"ID":"72b08661-098c-44a0-82f1-7b1277bc65ba","Type":"ContainerStarted","Data":"2cd5ed4d81589bb0392c04392c7fd9353668a7a5dbbcfd2c09b5b68f2606f54a"} Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.335072 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk" event={"ID":"ef1ace38-76ed-4913-a9ac-8954a34820ae","Type":"ContainerStarted","Data":"04e7056e47b460e40fed8f6b31ab1e3d7ae1eb8ee12b7296344778896089b81a"} Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.340911 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j72fl" event={"ID":"05f6d2a7-33e5-42fe-8d92-bdb258597a56","Type":"ContainerStarted","Data":"c03a15e73001ff23567c6170cb92f62083241cbafbee007dec9f069e5902eda3"} Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.340947 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j72fl" event={"ID":"05f6d2a7-33e5-42fe-8d92-bdb258597a56","Type":"ContainerStarted","Data":"1e7aaeb33a7df72b477ee0ed72bd0898e7c3d9f6619024624261f1b813d848d7"} Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.352102 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-dqtnp" podStartSLOduration=7.352079345 
podStartE2EDuration="7.352079345s" podCreationTimestamp="2025-11-22 00:08:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:25.348924169 +0000 UTC m=+140.636955370" watchObservedRunningTime="2025-11-22 00:08:25.352079345 +0000 UTC m=+140.640110546" Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.357916 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-c64mx" event={"ID":"13f49211-e6af-49ed-823f-d1b54d73ac0b","Type":"ContainerStarted","Data":"8dd513f62ac266a92616f812f9f8c9c4cd7ead558291699776c5b82a2eeb1f69"} Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.360293 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ftrnl" event={"ID":"2bf436d0-26db-4a3b-b9c2-8ca66b3a37a7","Type":"ContainerStarted","Data":"c9a55d9b6d5884d4da8cb92ca6e2d6ccdcbbcc2ed64860474233c429a2ff769f"} Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.363818 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c4rrg" event={"ID":"6416d870-85ef-468a-a9a2-b2429b06a6d3","Type":"ContainerStarted","Data":"8d45fea86e884593328020fa9c20394676af73a89d4d129c4aaef026c0f0616c"} Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.365161 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-54l8n" Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.366360 4928 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qmvbl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 
00:08:25.366417 4928 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmvbl" podUID="0a5b438c-2c33-49fa-a7c9-98aef8f764f3" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.366471 4928 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-nzv4d container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.366530 4928 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzv4d" podUID="47c44a11-8d04-4a95-9ce0-6436dbf6a035" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.367016 4928 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-hnm59 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.367237 4928 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-hnm59" podUID="4f50294b-9edd-469f-8873-aeb445d7653b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.371673 4928 patch_prober.go:28] interesting 
pod/marketplace-operator-79b997595-54l8n container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.371728 4928 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-54l8n" podUID="db624f52-ab96-42e2-a54c-62c2abc36274" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.380339 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lpmm4" podStartSLOduration=120.380308654 podStartE2EDuration="2m0.380308654s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:25.37855865 +0000 UTC m=+140.666589841" watchObservedRunningTime="2025-11-22 00:08:25.380308654 +0000 UTC m=+140.668339845" Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.400104 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:25 crc kubenswrapper[4928]: E1122 00:08:25.401608 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-22 00:08:25.901593731 +0000 UTC m=+141.189624922 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.426997 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n62d6" podStartSLOduration=120.426981043 podStartE2EDuration="2m0.426981043s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:25.408292175 +0000 UTC m=+140.696323366" watchObservedRunningTime="2025-11-22 00:08:25.426981043 +0000 UTC m=+140.715012234" Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.444089 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-c64mx" podStartSLOduration=120.444070234 podStartE2EDuration="2m0.444070234s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:25.440629538 +0000 UTC m=+140.728660729" watchObservedRunningTime="2025-11-22 00:08:25.444070234 +0000 UTC m=+140.732101425" Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.484817 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjn2t" 
podStartSLOduration=120.484767471 podStartE2EDuration="2m0.484767471s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:25.466091503 +0000 UTC m=+140.754122694" watchObservedRunningTime="2025-11-22 00:08:25.484767471 +0000 UTC m=+140.772798672" Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.486685 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ftrnl" podStartSLOduration=120.48667886 podStartE2EDuration="2m0.48667886s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:25.485697189 +0000 UTC m=+140.773728380" watchObservedRunningTime="2025-11-22 00:08:25.48667886 +0000 UTC m=+140.774710051" Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.502474 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:25 crc kubenswrapper[4928]: E1122 00:08:25.502783 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:26.002770189 +0000 UTC m=+141.290801380 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.507022 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c4rrg" podStartSLOduration=120.507003848 podStartE2EDuration="2m0.507003848s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:25.505877334 +0000 UTC m=+140.793908525" watchObservedRunningTime="2025-11-22 00:08:25.507003848 +0000 UTC m=+140.795035039" Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.512101 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk" Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.513143 4928 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-7x8dk container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.23:8443/livez\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.513184 4928 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk" podUID="ef1ace38-76ed-4913-a9ac-8954a34820ae" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.23:8443/livez\": dial tcp 10.217.0.23:8443: connect: connection refused" Nov 22 
00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.515981 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk" Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.565593 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-87wpn" podStartSLOduration=7.56557968 podStartE2EDuration="7.56557968s" podCreationTimestamp="2025-11-22 00:08:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:25.564190097 +0000 UTC m=+140.852221288" watchObservedRunningTime="2025-11-22 00:08:25.56557968 +0000 UTC m=+140.853610871" Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.566398 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk" podStartSLOduration=120.566393834 podStartE2EDuration="2m0.566393834s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:25.54029104 +0000 UTC m=+140.828322231" watchObservedRunningTime="2025-11-22 00:08:25.566393834 +0000 UTC m=+140.854425025" Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.603396 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:25 crc kubenswrapper[4928]: E1122 00:08:25.603984 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:26.103970238 +0000 UTC m=+141.392001429 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.604019 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:25 crc kubenswrapper[4928]: E1122 00:08:25.604316 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:26.104308418 +0000 UTC m=+141.392339609 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.699676 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j72fl" podStartSLOduration=120.699652808 podStartE2EDuration="2m0.699652808s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:25.585193916 +0000 UTC m=+140.873225117" watchObservedRunningTime="2025-11-22 00:08:25.699652808 +0000 UTC m=+140.987683999" Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.705531 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:25 crc kubenswrapper[4928]: E1122 00:08:25.705943 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:26.205923279 +0000 UTC m=+141.493954470 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.807058 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:25 crc kubenswrapper[4928]: E1122 00:08:25.807368 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:26.307357204 +0000 UTC m=+141.595388395 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.908254 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:25 crc kubenswrapper[4928]: E1122 00:08:25.908438 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:26.408412678 +0000 UTC m=+141.696443869 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:25 crc kubenswrapper[4928]: I1122 00:08:25.908521 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:25 crc kubenswrapper[4928]: E1122 00:08:25.908819 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:26.40880844 +0000 UTC m=+141.696839631 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:26 crc kubenswrapper[4928]: I1122 00:08:26.009183 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:26 crc kubenswrapper[4928]: E1122 00:08:26.009604 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:26.509589876 +0000 UTC m=+141.797621067 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:26 crc kubenswrapper[4928]: I1122 00:08:26.044006 4928 patch_prober.go:28] interesting pod/router-default-5444994796-rz49f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 00:08:26 crc kubenswrapper[4928]: [-]has-synced failed: reason withheld Nov 22 00:08:26 crc kubenswrapper[4928]: [+]process-running ok Nov 22 00:08:26 crc kubenswrapper[4928]: healthz check failed Nov 22 00:08:26 crc kubenswrapper[4928]: I1122 00:08:26.044067 4928 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rz49f" podUID="4765c779-803b-42f9-9598-c91eec1b5e9e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 00:08:26 crc kubenswrapper[4928]: I1122 00:08:26.110475 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:26 crc kubenswrapper[4928]: E1122 00:08:26.110996 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-22 00:08:26.610973441 +0000 UTC m=+141.899004712 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:26 crc kubenswrapper[4928]: I1122 00:08:26.212255 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:26 crc kubenswrapper[4928]: E1122 00:08:26.212376 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:26.712350995 +0000 UTC m=+142.000382186 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:26 crc kubenswrapper[4928]: I1122 00:08:26.212423 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:26 crc kubenswrapper[4928]: E1122 00:08:26.212741 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:26.712733626 +0000 UTC m=+142.000764807 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:26 crc kubenswrapper[4928]: I1122 00:08:26.313271 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:26 crc kubenswrapper[4928]: E1122 00:08:26.313743 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:26.813724648 +0000 UTC m=+142.101755839 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:26 crc kubenswrapper[4928]: I1122 00:08:26.370566 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-kxr5c" event={"ID":"0d99453e-a6fd-4903-9889-0f2229a26fbf","Type":"ContainerStarted","Data":"fbf926f347f2be8c498ff7d97db3f68e57921ed60961f866562098583c5b20ba"} Nov 22 00:08:26 crc kubenswrapper[4928]: I1122 00:08:26.373631 4928 generic.go:334] "Generic (PLEG): container finished" podID="8480834f-a7e6-4459-a2d2-73b968249ce7" containerID="22157fa0f5773d77a9c9c25987190f0678af7130d3867b47adad3e2f17be2b91" exitCode=0 Nov 22 00:08:26 crc kubenswrapper[4928]: I1122 00:08:26.373689 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396160-c8ch5" event={"ID":"8480834f-a7e6-4459-a2d2-73b968249ce7","Type":"ContainerDied","Data":"22157fa0f5773d77a9c9c25987190f0678af7130d3867b47adad3e2f17be2b91"} Nov 22 00:08:26 crc kubenswrapper[4928]: I1122 00:08:26.376468 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6fxj5" event={"ID":"f4d80d6f-91bd-4ea6-ac44-993a78316ba2","Type":"ContainerStarted","Data":"a96bf98eaa5e0bf8b2f70440c6ecc5782dd8abe27942ddd096fd622c4ef522ff"} Nov 22 00:08:26 crc kubenswrapper[4928]: I1122 00:08:26.378057 4928 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-54l8n container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 
10.217.0.39:8080: connect: connection refused" start-of-body= Nov 22 00:08:26 crc kubenswrapper[4928]: I1122 00:08:26.378097 4928 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-54l8n" podUID="db624f52-ab96-42e2-a54c-62c2abc36274" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Nov 22 00:08:26 crc kubenswrapper[4928]: I1122 00:08:26.414735 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:26 crc kubenswrapper[4928]: E1122 00:08:26.420108 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:26.920092634 +0000 UTC m=+142.208123905 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:26 crc kubenswrapper[4928]: I1122 00:08:26.440508 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-kxr5c" podStartSLOduration=121.440489404 podStartE2EDuration="2m1.440489404s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:26.401019063 +0000 UTC m=+141.689050264" watchObservedRunningTime="2025-11-22 00:08:26.440489404 +0000 UTC m=+141.728520595" Nov 22 00:08:26 crc kubenswrapper[4928]: I1122 00:08:26.515880 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:26 crc kubenswrapper[4928]: E1122 00:08:26.516115 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:27.016082854 +0000 UTC m=+142.304114055 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:26 crc kubenswrapper[4928]: I1122 00:08:26.516661 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:26 crc kubenswrapper[4928]: E1122 00:08:26.517024 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:27.017014633 +0000 UTC m=+142.305045824 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:26 crc kubenswrapper[4928]: I1122 00:08:26.617716 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:26 crc kubenswrapper[4928]: E1122 00:08:26.618164 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:27.118147249 +0000 UTC m=+142.406178440 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:26 crc kubenswrapper[4928]: I1122 00:08:26.718954 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:26 crc kubenswrapper[4928]: E1122 00:08:26.719362 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:27.219345967 +0000 UTC m=+142.507377158 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:26 crc kubenswrapper[4928]: I1122 00:08:26.820356 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:26 crc kubenswrapper[4928]: E1122 00:08:26.820562 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:27.320531365 +0000 UTC m=+142.608562566 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:26 crc kubenswrapper[4928]: I1122 00:08:26.820651 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:26 crc kubenswrapper[4928]: E1122 00:08:26.820977 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:27.320963468 +0000 UTC m=+142.608994659 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:26 crc kubenswrapper[4928]: I1122 00:08:26.922137 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:26 crc kubenswrapper[4928]: E1122 00:08:26.922263 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:27.422246599 +0000 UTC m=+142.710277790 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:26 crc kubenswrapper[4928]: I1122 00:08:26.922514 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:26 crc kubenswrapper[4928]: E1122 00:08:26.922814 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:27.422805896 +0000 UTC m=+142.710837087 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.024102 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:27 crc kubenswrapper[4928]: E1122 00:08:27.024391 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:27.524366216 +0000 UTC m=+142.812397407 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.024851 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:27 crc kubenswrapper[4928]: E1122 00:08:27.025193 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:27.52517798 +0000 UTC m=+142.813209171 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.042365 4928 patch_prober.go:28] interesting pod/router-default-5444994796-rz49f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 00:08:27 crc kubenswrapper[4928]: [-]has-synced failed: reason withheld Nov 22 00:08:27 crc kubenswrapper[4928]: [+]process-running ok Nov 22 00:08:27 crc kubenswrapper[4928]: healthz check failed Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.042428 4928 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rz49f" podUID="4765c779-803b-42f9-9598-c91eec1b5e9e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.126034 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:27 crc kubenswrapper[4928]: E1122 00:08:27.126457 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-22 00:08:27.626440711 +0000 UTC m=+142.914471902 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.228021 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:27 crc kubenswrapper[4928]: E1122 00:08:27.228393 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:27.728379061 +0000 UTC m=+143.016410252 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.329537 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:27 crc kubenswrapper[4928]: E1122 00:08:27.329947 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:27.829933501 +0000 UTC m=+143.117964692 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.377337 4928 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-lpmm4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.377405 4928 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lpmm4" podUID="8235f10d-349e-42ab-93e3-ed056e36bca4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.382748 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6fxj5" event={"ID":"f4d80d6f-91bd-4ea6-ac44-993a78316ba2","Type":"ContainerStarted","Data":"4568f6a271b3182549653a44b7955bde05d6eba6090c5ad2e270bc5064b92e3e"} Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.382778 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6fxj5" event={"ID":"f4d80d6f-91bd-4ea6-ac44-993a78316ba2","Type":"ContainerStarted","Data":"b1dac0c73e65199620239292bde43542b1461b9c4c93ba9b0bcd715113318188"} Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 
00:08:27.395128 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rrw57"] Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.396283 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rrw57" Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.401116 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.431408 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09021957-55db-401c-a705-d3d308653cc9-catalog-content\") pod \"community-operators-rrw57\" (UID: \"09021957-55db-401c-a705-d3d308653cc9\") " pod="openshift-marketplace/community-operators-rrw57" Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.431552 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.431592 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09021957-55db-401c-a705-d3d308653cc9-utilities\") pod \"community-operators-rrw57\" (UID: \"09021957-55db-401c-a705-d3d308653cc9\") " pod="openshift-marketplace/community-operators-rrw57" Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.431657 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz2vc\" (UniqueName: 
\"kubernetes.io/projected/09021957-55db-401c-a705-d3d308653cc9-kube-api-access-hz2vc\") pod \"community-operators-rrw57\" (UID: \"09021957-55db-401c-a705-d3d308653cc9\") " pod="openshift-marketplace/community-operators-rrw57" Nov 22 00:08:27 crc kubenswrapper[4928]: E1122 00:08:27.433873 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:27.933849722 +0000 UTC m=+143.221880903 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.496093 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rrw57"] Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.541272 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:27 crc kubenswrapper[4928]: E1122 00:08:27.541474 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-22 00:08:28.041448265 +0000 UTC m=+143.329479456 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.541514 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09021957-55db-401c-a705-d3d308653cc9-catalog-content\") pod \"community-operators-rrw57\" (UID: \"09021957-55db-401c-a705-d3d308653cc9\") " pod="openshift-marketplace/community-operators-rrw57" Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.541578 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.541598 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09021957-55db-401c-a705-d3d308653cc9-utilities\") pod \"community-operators-rrw57\" (UID: \"09021957-55db-401c-a705-d3d308653cc9\") " pod="openshift-marketplace/community-operators-rrw57" Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.541633 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz2vc\" (UniqueName: 
\"kubernetes.io/projected/09021957-55db-401c-a705-d3d308653cc9-kube-api-access-hz2vc\") pod \"community-operators-rrw57\" (UID: \"09021957-55db-401c-a705-d3d308653cc9\") " pod="openshift-marketplace/community-operators-rrw57" Nov 22 00:08:27 crc kubenswrapper[4928]: E1122 00:08:27.541923 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:28.041907169 +0000 UTC m=+143.329938360 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.541999 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09021957-55db-401c-a705-d3d308653cc9-catalog-content\") pod \"community-operators-rrw57\" (UID: \"09021957-55db-401c-a705-d3d308653cc9\") " pod="openshift-marketplace/community-operators-rrw57" Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.542201 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09021957-55db-401c-a705-d3d308653cc9-utilities\") pod \"community-operators-rrw57\" (UID: \"09021957-55db-401c-a705-d3d308653cc9\") " pod="openshift-marketplace/community-operators-rrw57" Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.578173 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz2vc\" (UniqueName: 
\"kubernetes.io/projected/09021957-55db-401c-a705-d3d308653cc9-kube-api-access-hz2vc\") pod \"community-operators-rrw57\" (UID: \"09021957-55db-401c-a705-d3d308653cc9\") " pod="openshift-marketplace/community-operators-rrw57" Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.603382 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wnvn4"] Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.604268 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wnvn4" Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.609120 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.642210 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wnvn4"] Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.642491 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.642695 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3d779fd-e36a-44a9-93d2-266fc054b0f9-utilities\") pod \"certified-operators-wnvn4\" (UID: \"b3d779fd-e36a-44a9-93d2-266fc054b0f9\") " pod="openshift-marketplace/certified-operators-wnvn4" Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.642721 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b3d779fd-e36a-44a9-93d2-266fc054b0f9-catalog-content\") pod \"certified-operators-wnvn4\" (UID: \"b3d779fd-e36a-44a9-93d2-266fc054b0f9\") " pod="openshift-marketplace/certified-operators-wnvn4" Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.642767 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6gcs\" (UniqueName: \"kubernetes.io/projected/b3d779fd-e36a-44a9-93d2-266fc054b0f9-kube-api-access-q6gcs\") pod \"certified-operators-wnvn4\" (UID: \"b3d779fd-e36a-44a9-93d2-266fc054b0f9\") " pod="openshift-marketplace/certified-operators-wnvn4" Nov 22 00:08:27 crc kubenswrapper[4928]: E1122 00:08:27.642898 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:28.14288592 +0000 UTC m=+143.430917111 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.718442 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lpmm4" Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.720109 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rrw57" Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.744413 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3d779fd-e36a-44a9-93d2-266fc054b0f9-utilities\") pod \"certified-operators-wnvn4\" (UID: \"b3d779fd-e36a-44a9-93d2-266fc054b0f9\") " pod="openshift-marketplace/certified-operators-wnvn4" Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.744451 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3d779fd-e36a-44a9-93d2-266fc054b0f9-catalog-content\") pod \"certified-operators-wnvn4\" (UID: \"b3d779fd-e36a-44a9-93d2-266fc054b0f9\") " pod="openshift-marketplace/certified-operators-wnvn4" Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.744552 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6gcs\" (UniqueName: \"kubernetes.io/projected/b3d779fd-e36a-44a9-93d2-266fc054b0f9-kube-api-access-q6gcs\") pod \"certified-operators-wnvn4\" (UID: \"b3d779fd-e36a-44a9-93d2-266fc054b0f9\") " pod="openshift-marketplace/certified-operators-wnvn4" Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.744600 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:27 crc kubenswrapper[4928]: E1122 00:08:27.744888 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-22 00:08:28.244876094 +0000 UTC m=+143.532907285 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.745335 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3d779fd-e36a-44a9-93d2-266fc054b0f9-utilities\") pod \"certified-operators-wnvn4\" (UID: \"b3d779fd-e36a-44a9-93d2-266fc054b0f9\") " pod="openshift-marketplace/certified-operators-wnvn4" Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.745590 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3d779fd-e36a-44a9-93d2-266fc054b0f9-catalog-content\") pod \"certified-operators-wnvn4\" (UID: \"b3d779fd-e36a-44a9-93d2-266fc054b0f9\") " pod="openshift-marketplace/certified-operators-wnvn4" Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.821741 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6gcs\" (UniqueName: \"kubernetes.io/projected/b3d779fd-e36a-44a9-93d2-266fc054b0f9-kube-api-access-q6gcs\") pod \"certified-operators-wnvn4\" (UID: \"b3d779fd-e36a-44a9-93d2-266fc054b0f9\") " pod="openshift-marketplace/certified-operators-wnvn4" Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.835735 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7vbgf"] Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.836739 4928 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7vbgf" Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.863639 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:27 crc kubenswrapper[4928]: E1122 00:08:27.864247 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:28.364232524 +0000 UTC m=+143.652263705 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.874207 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7vbgf"] Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.927880 4928 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.952093 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wnvn4" Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.969701 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61f75f23-c9ca-453e-a2e4-6d78939022db-utilities\") pod \"community-operators-7vbgf\" (UID: \"61f75f23-c9ca-453e-a2e4-6d78939022db\") " pod="openshift-marketplace/community-operators-7vbgf" Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.969959 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkqvm\" (UniqueName: \"kubernetes.io/projected/61f75f23-c9ca-453e-a2e4-6d78939022db-kube-api-access-wkqvm\") pod \"community-operators-7vbgf\" (UID: \"61f75f23-c9ca-453e-a2e4-6d78939022db\") " pod="openshift-marketplace/community-operators-7vbgf" Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.970051 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61f75f23-c9ca-453e-a2e4-6d78939022db-catalog-content\") pod \"community-operators-7vbgf\" (UID: \"61f75f23-c9ca-453e-a2e4-6d78939022db\") " pod="openshift-marketplace/community-operators-7vbgf" Nov 22 00:08:27 crc kubenswrapper[4928]: I1122 00:08:27.970142 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:27 crc kubenswrapper[4928]: E1122 00:08:27.971872 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:28.471859868 +0000 UTC m=+143.759891059 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.000769 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rscpw"] Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.001917 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rscpw" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.005381 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rscpw"] Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.028047 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396160-c8ch5" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.045213 4928 patch_prober.go:28] interesting pod/router-default-5444994796-rz49f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 00:08:28 crc kubenswrapper[4928]: [-]has-synced failed: reason withheld Nov 22 00:08:28 crc kubenswrapper[4928]: [+]process-running ok Nov 22 00:08:28 crc kubenswrapper[4928]: healthz check failed Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.045641 4928 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rz49f" podUID="4765c779-803b-42f9-9598-c91eec1b5e9e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.070780 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.070870 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8480834f-a7e6-4459-a2d2-73b968249ce7-config-volume\") pod \"8480834f-a7e6-4459-a2d2-73b968249ce7\" (UID: \"8480834f-a7e6-4459-a2d2-73b968249ce7\") " Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.070917 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8480834f-a7e6-4459-a2d2-73b968249ce7-secret-volume\") pod \"8480834f-a7e6-4459-a2d2-73b968249ce7\" (UID: 
\"8480834f-a7e6-4459-a2d2-73b968249ce7\") " Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.070957 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzf4w\" (UniqueName: \"kubernetes.io/projected/8480834f-a7e6-4459-a2d2-73b968249ce7-kube-api-access-kzf4w\") pod \"8480834f-a7e6-4459-a2d2-73b968249ce7\" (UID: \"8480834f-a7e6-4459-a2d2-73b968249ce7\") " Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.071098 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b9405dc-ca44-4909-a7ca-499354e67172-catalog-content\") pod \"certified-operators-rscpw\" (UID: \"8b9405dc-ca44-4909-a7ca-499354e67172\") " pod="openshift-marketplace/certified-operators-rscpw" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.071139 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b9405dc-ca44-4909-a7ca-499354e67172-utilities\") pod \"certified-operators-rscpw\" (UID: \"8b9405dc-ca44-4909-a7ca-499354e67172\") " pod="openshift-marketplace/certified-operators-rscpw" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.071208 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61f75f23-c9ca-453e-a2e4-6d78939022db-utilities\") pod \"community-operators-7vbgf\" (UID: \"61f75f23-c9ca-453e-a2e4-6d78939022db\") " pod="openshift-marketplace/community-operators-7vbgf" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.071237 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkqvm\" (UniqueName: \"kubernetes.io/projected/61f75f23-c9ca-453e-a2e4-6d78939022db-kube-api-access-wkqvm\") pod \"community-operators-7vbgf\" (UID: \"61f75f23-c9ca-453e-a2e4-6d78939022db\") " 
pod="openshift-marketplace/community-operators-7vbgf" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.071266 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llnph\" (UniqueName: \"kubernetes.io/projected/8b9405dc-ca44-4909-a7ca-499354e67172-kube-api-access-llnph\") pod \"certified-operators-rscpw\" (UID: \"8b9405dc-ca44-4909-a7ca-499354e67172\") " pod="openshift-marketplace/certified-operators-rscpw" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.071282 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61f75f23-c9ca-453e-a2e4-6d78939022db-catalog-content\") pod \"community-operators-7vbgf\" (UID: \"61f75f23-c9ca-453e-a2e4-6d78939022db\") " pod="openshift-marketplace/community-operators-7vbgf" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.071682 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61f75f23-c9ca-453e-a2e4-6d78939022db-catalog-content\") pod \"community-operators-7vbgf\" (UID: \"61f75f23-c9ca-453e-a2e4-6d78939022db\") " pod="openshift-marketplace/community-operators-7vbgf" Nov 22 00:08:28 crc kubenswrapper[4928]: E1122 00:08:28.071878 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:28.571864241 +0000 UTC m=+143.859895432 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.072121 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61f75f23-c9ca-453e-a2e4-6d78939022db-utilities\") pod \"community-operators-7vbgf\" (UID: \"61f75f23-c9ca-453e-a2e4-6d78939022db\") " pod="openshift-marketplace/community-operators-7vbgf" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.072981 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8480834f-a7e6-4459-a2d2-73b968249ce7-config-volume" (OuterVolumeSpecName: "config-volume") pod "8480834f-a7e6-4459-a2d2-73b968249ce7" (UID: "8480834f-a7e6-4459-a2d2-73b968249ce7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.093841 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8480834f-a7e6-4459-a2d2-73b968249ce7-kube-api-access-kzf4w" (OuterVolumeSpecName: "kube-api-access-kzf4w") pod "8480834f-a7e6-4459-a2d2-73b968249ce7" (UID: "8480834f-a7e6-4459-a2d2-73b968249ce7"). InnerVolumeSpecName "kube-api-access-kzf4w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.096626 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkqvm\" (UniqueName: \"kubernetes.io/projected/61f75f23-c9ca-453e-a2e4-6d78939022db-kube-api-access-wkqvm\") pod \"community-operators-7vbgf\" (UID: \"61f75f23-c9ca-453e-a2e4-6d78939022db\") " pod="openshift-marketplace/community-operators-7vbgf" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.123389 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8480834f-a7e6-4459-a2d2-73b968249ce7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8480834f-a7e6-4459-a2d2-73b968249ce7" (UID: "8480834f-a7e6-4459-a2d2-73b968249ce7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.174013 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b9405dc-ca44-4909-a7ca-499354e67172-catalog-content\") pod \"certified-operators-rscpw\" (UID: \"8b9405dc-ca44-4909-a7ca-499354e67172\") " pod="openshift-marketplace/certified-operators-rscpw" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.174898 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b9405dc-ca44-4909-a7ca-499354e67172-utilities\") pod \"certified-operators-rscpw\" (UID: \"8b9405dc-ca44-4909-a7ca-499354e67172\") " pod="openshift-marketplace/certified-operators-rscpw" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.175011 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b9405dc-ca44-4909-a7ca-499354e67172-catalog-content\") pod \"certified-operators-rscpw\" (UID: 
\"8b9405dc-ca44-4909-a7ca-499354e67172\") " pod="openshift-marketplace/certified-operators-rscpw" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.175124 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llnph\" (UniqueName: \"kubernetes.io/projected/8b9405dc-ca44-4909-a7ca-499354e67172-kube-api-access-llnph\") pod \"certified-operators-rscpw\" (UID: \"8b9405dc-ca44-4909-a7ca-499354e67172\") " pod="openshift-marketplace/certified-operators-rscpw" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.175184 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.175230 4928 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8480834f-a7e6-4459-a2d2-73b968249ce7-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.175242 4928 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8480834f-a7e6-4459-a2d2-73b968249ce7-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.175253 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzf4w\" (UniqueName: \"kubernetes.io/projected/8480834f-a7e6-4459-a2d2-73b968249ce7-kube-api-access-kzf4w\") on node \"crc\" DevicePath \"\"" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.175485 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8b9405dc-ca44-4909-a7ca-499354e67172-utilities\") pod \"certified-operators-rscpw\" (UID: \"8b9405dc-ca44-4909-a7ca-499354e67172\") " pod="openshift-marketplace/certified-operators-rscpw" Nov 22 00:08:28 crc kubenswrapper[4928]: E1122 00:08:28.175593 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:28.675575996 +0000 UTC m=+143.963607187 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.224469 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llnph\" (UniqueName: \"kubernetes.io/projected/8b9405dc-ca44-4909-a7ca-499354e67172-kube-api-access-llnph\") pod \"certified-operators-rscpw\" (UID: \"8b9405dc-ca44-4909-a7ca-499354e67172\") " pod="openshift-marketplace/certified-operators-rscpw" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.225203 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 22 00:08:28 crc kubenswrapper[4928]: E1122 00:08:28.225431 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8480834f-a7e6-4459-a2d2-73b968249ce7" containerName="collect-profiles" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.225448 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="8480834f-a7e6-4459-a2d2-73b968249ce7" 
containerName="collect-profiles" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.225544 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="8480834f-a7e6-4459-a2d2-73b968249ce7" containerName="collect-profiles" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.226000 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.227676 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7vbgf" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.228871 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.229178 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.246718 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.279413 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.279652 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/760e859b-a35f-4663-8c09-5a1225237ecd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"760e859b-a35f-4663-8c09-5a1225237ecd\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.279716 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/760e859b-a35f-4663-8c09-5a1225237ecd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"760e859b-a35f-4663-8c09-5a1225237ecd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 00:08:28 crc kubenswrapper[4928]: E1122 00:08:28.279873 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:28.779856428 +0000 UTC m=+144.067887619 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.374039 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rscpw" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.395734 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/760e859b-a35f-4663-8c09-5a1225237ecd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"760e859b-a35f-4663-8c09-5a1225237ecd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.395782 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.395858 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/760e859b-a35f-4663-8c09-5a1225237ecd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"760e859b-a35f-4663-8c09-5a1225237ecd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.396147 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/760e859b-a35f-4663-8c09-5a1225237ecd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"760e859b-a35f-4663-8c09-5a1225237ecd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 00:08:28 crc kubenswrapper[4928]: E1122 00:08:28.396373 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-22 00:08:28.896362411 +0000 UTC m=+144.184393602 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.430057 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396160-c8ch5" event={"ID":"8480834f-a7e6-4459-a2d2-73b968249ce7","Type":"ContainerDied","Data":"b5cc21d20c11c610634f75c7fa51e093cb135308aa4cf70d43428f054f48d807"} Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.430106 4928 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5cc21d20c11c610634f75c7fa51e093cb135308aa4cf70d43428f054f48d807" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.430174 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396160-c8ch5" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.471222 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/760e859b-a35f-4663-8c09-5a1225237ecd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"760e859b-a35f-4663-8c09-5a1225237ecd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.507981 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6fxj5" event={"ID":"f4d80d6f-91bd-4ea6-ac44-993a78316ba2","Type":"ContainerStarted","Data":"dba1f58e91bd191e4159afa13e1bf3b65ce119f3dec105353ea24e16da80cd54"} Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.508752 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:28 crc kubenswrapper[4928]: E1122 00:08:28.509418 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:29.00939098 +0000 UTC m=+144.297422171 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.517534 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rrw57"] Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.543921 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-6fxj5" podStartSLOduration=10.54390174 podStartE2EDuration="10.54390174s" podCreationTimestamp="2025-11-22 00:08:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:28.538125194 +0000 UTC m=+143.826156385" watchObservedRunningTime="2025-11-22 00:08:28.54390174 +0000 UTC m=+143.831932931" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.611675 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wnvn4"] Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.612359 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:28 crc kubenswrapper[4928]: E1122 00:08:28.613750 4928 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:29.113734374 +0000 UTC m=+144.401765565 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.632767 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.714438 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:28 crc kubenswrapper[4928]: E1122 00:08:28.715207 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 00:08:29.21519158 +0000 UTC m=+144.503222771 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.779470 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7vbgf"] Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.825201 4928 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-22T00:08:27.927901541Z","Handler":null,"Name":""} Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.827156 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:28 crc kubenswrapper[4928]: E1122 00:08:28.827558 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 00:08:29.327540458 +0000 UTC m=+144.615571649 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dc949" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.906506 4928 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.906547 4928 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.914193 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rscpw"] Nov 22 00:08:28 crc kubenswrapper[4928]: W1122 00:08:28.923606 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b9405dc_ca44_4909_a7ca_499354e67172.slice/crio-cc6c72809e1f1e377e150b06720e6667c16b77a768068d8ae553db2fcd8fd7a4 WatchSource:0}: Error finding container cc6c72809e1f1e377e150b06720e6667c16b77a768068d8ae553db2fcd8fd7a4: Status 404 returned error can't find the container with id cc6c72809e1f1e377e150b06720e6667c16b77a768068d8ae553db2fcd8fd7a4 Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.928328 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 00:08:28 crc kubenswrapper[4928]: I1122 00:08:28.935191 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.031900 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.050824 4928 patch_prober.go:28] interesting pod/router-default-5444994796-rz49f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 00:08:29 crc kubenswrapper[4928]: [-]has-synced failed: reason withheld Nov 22 00:08:29 crc kubenswrapper[4928]: [+]process-running ok Nov 22 00:08:29 crc kubenswrapper[4928]: healthz check failed Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.050884 4928 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rz49f" podUID="4765c779-803b-42f9-9598-c91eec1b5e9e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.055874 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.066159 4928 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.066203 4928 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.128095 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dc949\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.332373 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.513715 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"760e859b-a35f-4663-8c09-5a1225237ecd","Type":"ContainerStarted","Data":"9ad1880a859e2c5c44406cdf4930f18762e5c743c7e98d2a903c9f627cbb40ea"} Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.513783 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"760e859b-a35f-4663-8c09-5a1225237ecd","Type":"ContainerStarted","Data":"bfeadc8c964704f7f9713b70b0aa1f8860bda93115bca4fb07eebb8f6dfd9ace"} Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.515262 4928 generic.go:334] "Generic (PLEG): container finished" podID="61f75f23-c9ca-453e-a2e4-6d78939022db" containerID="fdde58316ad233a981402220994c9ec80a9bed9d813249ac51d1ff9fee2c9f6c" exitCode=0 Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.515307 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7vbgf" event={"ID":"61f75f23-c9ca-453e-a2e4-6d78939022db","Type":"ContainerDied","Data":"fdde58316ad233a981402220994c9ec80a9bed9d813249ac51d1ff9fee2c9f6c"} Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.515325 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7vbgf" event={"ID":"61f75f23-c9ca-453e-a2e4-6d78939022db","Type":"ContainerStarted","Data":"61a6a9723aa8a58a5338479f43b5ae2ec6ce27aafee648aed277e10bb48f3b2e"} Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.517278 4928 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.521080 4928 generic.go:334] "Generic (PLEG): container finished" podID="8b9405dc-ca44-4909-a7ca-499354e67172" 
containerID="ca588e0609f6eee7806e03ee6890e65ece21e26a963663fd25f059de905e9623" exitCode=0 Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.521174 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rscpw" event={"ID":"8b9405dc-ca44-4909-a7ca-499354e67172","Type":"ContainerDied","Data":"ca588e0609f6eee7806e03ee6890e65ece21e26a963663fd25f059de905e9623"} Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.521203 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rscpw" event={"ID":"8b9405dc-ca44-4909-a7ca-499354e67172","Type":"ContainerStarted","Data":"cc6c72809e1f1e377e150b06720e6667c16b77a768068d8ae553db2fcd8fd7a4"} Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.523769 4928 generic.go:334] "Generic (PLEG): container finished" podID="b3d779fd-e36a-44a9-93d2-266fc054b0f9" containerID="80036f143c3e639be2093758d4c011f5f11004e131192b9807a1e701455b04bf" exitCode=0 Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.523826 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wnvn4" event={"ID":"b3d779fd-e36a-44a9-93d2-266fc054b0f9","Type":"ContainerDied","Data":"80036f143c3e639be2093758d4c011f5f11004e131192b9807a1e701455b04bf"} Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.525546 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wnvn4" event={"ID":"b3d779fd-e36a-44a9-93d2-266fc054b0f9","Type":"ContainerStarted","Data":"a735391ef56a702e5a446e0437046998253032fd194328663c7747d9aab434f6"} Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.531286 4928 generic.go:334] "Generic (PLEG): container finished" podID="09021957-55db-401c-a705-d3d308653cc9" containerID="72bbe4b4b203d04a2da8a9587d5075a389cc2de6a513f79e35e9de5d09b0e1a1" exitCode=0 Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.532058 4928 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-rrw57" event={"ID":"09021957-55db-401c-a705-d3d308653cc9","Type":"ContainerDied","Data":"72bbe4b4b203d04a2da8a9587d5075a389cc2de6a513f79e35e9de5d09b0e1a1"} Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.532142 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrw57" event={"ID":"09021957-55db-401c-a705-d3d308653cc9","Type":"ContainerStarted","Data":"346ae5979c5e6aed80a33cb7f31c8d480dca4fbbea067b03ea834ff325ee11ad"} Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.540193 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.5401741960000002 podStartE2EDuration="1.540174196s" podCreationTimestamp="2025-11-22 00:08:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:29.532148882 +0000 UTC m=+144.820180083" watchObservedRunningTime="2025-11-22 00:08:29.540174196 +0000 UTC m=+144.828205387" Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.606880 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dc949"] Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.654711 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.667200 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n62d6" Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.779973 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hd5kl"] Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 
00:08:29.781713 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hd5kl" Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.786780 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.794723 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hd5kl"] Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.849195 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcbsf\" (UniqueName: \"kubernetes.io/projected/0caf4878-137a-405f-9173-3187372a5135-kube-api-access-rcbsf\") pod \"redhat-marketplace-hd5kl\" (UID: \"0caf4878-137a-405f-9173-3187372a5135\") " pod="openshift-marketplace/redhat-marketplace-hd5kl" Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.849252 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0caf4878-137a-405f-9173-3187372a5135-catalog-content\") pod \"redhat-marketplace-hd5kl\" (UID: \"0caf4878-137a-405f-9173-3187372a5135\") " pod="openshift-marketplace/redhat-marketplace-hd5kl" Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.849279 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0caf4878-137a-405f-9173-3187372a5135-utilities\") pod \"redhat-marketplace-hd5kl\" (UID: \"0caf4878-137a-405f-9173-3187372a5135\") " pod="openshift-marketplace/redhat-marketplace-hd5kl" Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.950945 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcbsf\" (UniqueName: 
\"kubernetes.io/projected/0caf4878-137a-405f-9173-3187372a5135-kube-api-access-rcbsf\") pod \"redhat-marketplace-hd5kl\" (UID: \"0caf4878-137a-405f-9173-3187372a5135\") " pod="openshift-marketplace/redhat-marketplace-hd5kl" Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.951005 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0caf4878-137a-405f-9173-3187372a5135-catalog-content\") pod \"redhat-marketplace-hd5kl\" (UID: \"0caf4878-137a-405f-9173-3187372a5135\") " pod="openshift-marketplace/redhat-marketplace-hd5kl" Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.951039 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0caf4878-137a-405f-9173-3187372a5135-utilities\") pod \"redhat-marketplace-hd5kl\" (UID: \"0caf4878-137a-405f-9173-3187372a5135\") " pod="openshift-marketplace/redhat-marketplace-hd5kl" Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.951559 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0caf4878-137a-405f-9173-3187372a5135-utilities\") pod \"redhat-marketplace-hd5kl\" (UID: \"0caf4878-137a-405f-9173-3187372a5135\") " pod="openshift-marketplace/redhat-marketplace-hd5kl" Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.951692 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0caf4878-137a-405f-9173-3187372a5135-catalog-content\") pod \"redhat-marketplace-hd5kl\" (UID: \"0caf4878-137a-405f-9173-3187372a5135\") " pod="openshift-marketplace/redhat-marketplace-hd5kl" Nov 22 00:08:29 crc kubenswrapper[4928]: I1122 00:08:29.972637 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcbsf\" (UniqueName: 
\"kubernetes.io/projected/0caf4878-137a-405f-9173-3187372a5135-kube-api-access-rcbsf\") pod \"redhat-marketplace-hd5kl\" (UID: \"0caf4878-137a-405f-9173-3187372a5135\") " pod="openshift-marketplace/redhat-marketplace-hd5kl" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.041233 4928 patch_prober.go:28] interesting pod/router-default-5444994796-rz49f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 00:08:30 crc kubenswrapper[4928]: [-]has-synced failed: reason withheld Nov 22 00:08:30 crc kubenswrapper[4928]: [+]process-running ok Nov 22 00:08:30 crc kubenswrapper[4928]: healthz check failed Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.041294 4928 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rz49f" podUID="4765c779-803b-42f9-9598-c91eec1b5e9e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.103803 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hd5kl" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.185188 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-btw4s"] Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.186165 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btw4s" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.212000 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-btw4s"] Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.254421 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9tqq\" (UniqueName: \"kubernetes.io/projected/4f09254e-9e39-40fb-a69e-42d933b4cf7e-kube-api-access-d9tqq\") pod \"redhat-marketplace-btw4s\" (UID: \"4f09254e-9e39-40fb-a69e-42d933b4cf7e\") " pod="openshift-marketplace/redhat-marketplace-btw4s" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.254727 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f09254e-9e39-40fb-a69e-42d933b4cf7e-catalog-content\") pod \"redhat-marketplace-btw4s\" (UID: \"4f09254e-9e39-40fb-a69e-42d933b4cf7e\") " pod="openshift-marketplace/redhat-marketplace-btw4s" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.254747 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f09254e-9e39-40fb-a69e-42d933b4cf7e-utilities\") pod \"redhat-marketplace-btw4s\" (UID: \"4f09254e-9e39-40fb-a69e-42d933b4cf7e\") " pod="openshift-marketplace/redhat-marketplace-btw4s" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.308712 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hd5kl"] Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.355872 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9tqq\" (UniqueName: \"kubernetes.io/projected/4f09254e-9e39-40fb-a69e-42d933b4cf7e-kube-api-access-d9tqq\") pod \"redhat-marketplace-btw4s\" (UID: 
\"4f09254e-9e39-40fb-a69e-42d933b4cf7e\") " pod="openshift-marketplace/redhat-marketplace-btw4s" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.355923 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f09254e-9e39-40fb-a69e-42d933b4cf7e-catalog-content\") pod \"redhat-marketplace-btw4s\" (UID: \"4f09254e-9e39-40fb-a69e-42d933b4cf7e\") " pod="openshift-marketplace/redhat-marketplace-btw4s" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.355942 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f09254e-9e39-40fb-a69e-42d933b4cf7e-utilities\") pod \"redhat-marketplace-btw4s\" (UID: \"4f09254e-9e39-40fb-a69e-42d933b4cf7e\") " pod="openshift-marketplace/redhat-marketplace-btw4s" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.356411 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f09254e-9e39-40fb-a69e-42d933b4cf7e-utilities\") pod \"redhat-marketplace-btw4s\" (UID: \"4f09254e-9e39-40fb-a69e-42d933b4cf7e\") " pod="openshift-marketplace/redhat-marketplace-btw4s" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.356422 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f09254e-9e39-40fb-a69e-42d933b4cf7e-catalog-content\") pod \"redhat-marketplace-btw4s\" (UID: \"4f09254e-9e39-40fb-a69e-42d933b4cf7e\") " pod="openshift-marketplace/redhat-marketplace-btw4s" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.380461 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9tqq\" (UniqueName: \"kubernetes.io/projected/4f09254e-9e39-40fb-a69e-42d933b4cf7e-kube-api-access-d9tqq\") pod \"redhat-marketplace-btw4s\" (UID: \"4f09254e-9e39-40fb-a69e-42d933b4cf7e\") " 
pod="openshift-marketplace/redhat-marketplace-btw4s" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.521098 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.526536 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btw4s" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.526543 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7x8dk" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.543371 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-kxr5c" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.544888 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-kxr5c" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.555295 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.557279 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-kxr5c" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.557586 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.561264 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.562171 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.562373 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.598176 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-8hklq" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.603970 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hhl2z"] Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.623169 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hhl2z" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.635083 4928 generic.go:334] "Generic (PLEG): container finished" podID="760e859b-a35f-4663-8c09-5a1225237ecd" containerID="9ad1880a859e2c5c44406cdf4930f18762e5c743c7e98d2a903c9f627cbb40ea" exitCode=0 Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.635182 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.635211 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"760e859b-a35f-4663-8c09-5a1225237ecd","Type":"ContainerDied","Data":"9ad1880a859e2c5c44406cdf4930f18762e5c743c7e98d2a903c9f627cbb40ea"} Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.653739 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-fkl9p" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.654468 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-fkl9p" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.657015 4928 patch_prober.go:28] interesting pod/console-f9d7485db-fkl9p container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.657128 4928 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fkl9p" podUID="a5945d14-c3fa-42ae-91c3-64fe566c1b20" containerName="console" probeResult="failure" output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.662786 4928 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq4ks\" (UniqueName: \"kubernetes.io/projected/f0703e2f-994d-4ba3-8461-1c90ae1a063d-kube-api-access-dq4ks\") pod \"redhat-operators-hhl2z\" (UID: \"f0703e2f-994d-4ba3-8461-1c90ae1a063d\") " pod="openshift-marketplace/redhat-operators-hhl2z" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.662945 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0703e2f-994d-4ba3-8461-1c90ae1a063d-catalog-content\") pod \"redhat-operators-hhl2z\" (UID: \"f0703e2f-994d-4ba3-8461-1c90ae1a063d\") " pod="openshift-marketplace/redhat-operators-hhl2z" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.662992 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05e120bf-63dd-4299-b548-9c0afc78bc65-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"05e120bf-63dd-4299-b548-9c0afc78bc65\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.663021 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05e120bf-63dd-4299-b548-9c0afc78bc65-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"05e120bf-63dd-4299-b548-9c0afc78bc65\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.663078 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0703e2f-994d-4ba3-8461-1c90ae1a063d-utilities\") pod \"redhat-operators-hhl2z\" (UID: \"f0703e2f-994d-4ba3-8461-1c90ae1a063d\") " pod="openshift-marketplace/redhat-operators-hhl2z" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 
00:08:30.668249 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hhl2z"] Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.670805 4928 patch_prober.go:28] interesting pod/downloads-7954f5f757-bttq9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.670856 4928 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bttq9" podUID="a49e097b-7213-4398-800a-02a39ebc297e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.671133 4928 patch_prober.go:28] interesting pod/downloads-7954f5f757-bttq9 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.671171 4928 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bttq9" podUID="a49e097b-7213-4398-800a-02a39ebc297e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.674693 4928 generic.go:334] "Generic (PLEG): container finished" podID="0caf4878-137a-405f-9173-3187372a5135" containerID="5eae302f4a1cca843ab723c93b9f9c7f6e73a4f7660f3fd5ecb221cf48dba15f" exitCode=0 Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.674883 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hd5kl" 
event={"ID":"0caf4878-137a-405f-9173-3187372a5135","Type":"ContainerDied","Data":"5eae302f4a1cca843ab723c93b9f9c7f6e73a4f7660f3fd5ecb221cf48dba15f"} Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.674934 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hd5kl" event={"ID":"0caf4878-137a-405f-9173-3187372a5135","Type":"ContainerStarted","Data":"487a3a715961dc71072fdf3b62df3ffed600b202f3a4baa7e9b2d8c6a7f8b47b"} Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.685465 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dc949" event={"ID":"670d4817-7813-4326-9eb3-d3319ecc7ae2","Type":"ContainerStarted","Data":"1b3a3c1e9d6934bb1afe29d258ee6ab70729addcf516321900f49e033201f37a"} Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.685532 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dc949" event={"ID":"670d4817-7813-4326-9eb3-d3319ecc7ae2","Type":"ContainerStarted","Data":"6ee354e303059c877bba50a194a6cadacfaef67cd6774f4a4fbeff7453d18785"} Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.764545 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0703e2f-994d-4ba3-8461-1c90ae1a063d-catalog-content\") pod \"redhat-operators-hhl2z\" (UID: \"f0703e2f-994d-4ba3-8461-1c90ae1a063d\") " pod="openshift-marketplace/redhat-operators-hhl2z" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.764626 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05e120bf-63dd-4299-b548-9c0afc78bc65-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"05e120bf-63dd-4299-b548-9c0afc78bc65\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.764693 4928 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05e120bf-63dd-4299-b548-9c0afc78bc65-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"05e120bf-63dd-4299-b548-9c0afc78bc65\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.764719 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0703e2f-994d-4ba3-8461-1c90ae1a063d-utilities\") pod \"redhat-operators-hhl2z\" (UID: \"f0703e2f-994d-4ba3-8461-1c90ae1a063d\") " pod="openshift-marketplace/redhat-operators-hhl2z" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.764948 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq4ks\" (UniqueName: \"kubernetes.io/projected/f0703e2f-994d-4ba3-8461-1c90ae1a063d-kube-api-access-dq4ks\") pod \"redhat-operators-hhl2z\" (UID: \"f0703e2f-994d-4ba3-8461-1c90ae1a063d\") " pod="openshift-marketplace/redhat-operators-hhl2z" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.768288 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0703e2f-994d-4ba3-8461-1c90ae1a063d-utilities\") pod \"redhat-operators-hhl2z\" (UID: \"f0703e2f-994d-4ba3-8461-1c90ae1a063d\") " pod="openshift-marketplace/redhat-operators-hhl2z" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.768474 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0703e2f-994d-4ba3-8461-1c90ae1a063d-catalog-content\") pod \"redhat-operators-hhl2z\" (UID: \"f0703e2f-994d-4ba3-8461-1c90ae1a063d\") " pod="openshift-marketplace/redhat-operators-hhl2z" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.768587 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05e120bf-63dd-4299-b548-9c0afc78bc65-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"05e120bf-63dd-4299-b548-9c0afc78bc65\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.800254 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05e120bf-63dd-4299-b548-9c0afc78bc65-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"05e120bf-63dd-4299-b548-9c0afc78bc65\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.810496 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq4ks\" (UniqueName: \"kubernetes.io/projected/f0703e2f-994d-4ba3-8461-1c90ae1a063d-kube-api-access-dq4ks\") pod \"redhat-operators-hhl2z\" (UID: \"f0703e2f-994d-4ba3-8461-1c90ae1a063d\") " pod="openshift-marketplace/redhat-operators-hhl2z" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.820573 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dgmwk"] Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.822022 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dgmwk" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.834220 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmvbl" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.850414 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dgmwk"] Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.857720 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-dc949" podStartSLOduration=125.857698265 podStartE2EDuration="2m5.857698265s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:30.826219177 +0000 UTC m=+146.114250368" watchObservedRunningTime="2025-11-22 00:08:30.857698265 +0000 UTC m=+146.145729456" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.911075 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzv4d" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.941149 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-btw4s"] Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.963182 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.969661 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb-utilities\") pod \"redhat-operators-dgmwk\" (UID: \"9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb\") " pod="openshift-marketplace/redhat-operators-dgmwk" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.969707 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb-catalog-content\") pod \"redhat-operators-dgmwk\" (UID: \"9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb\") " pod="openshift-marketplace/redhat-operators-dgmwk" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.969733 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grmbq\" (UniqueName: \"kubernetes.io/projected/9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb-kube-api-access-grmbq\") pod \"redhat-operators-dgmwk\" (UID: \"9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb\") " pod="openshift-marketplace/redhat-operators-dgmwk" Nov 22 00:08:30 crc kubenswrapper[4928]: I1122 00:08:30.985224 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hhl2z" Nov 22 00:08:30 crc kubenswrapper[4928]: W1122 00:08:30.986316 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f09254e_9e39_40fb_a69e_42d933b4cf7e.slice/crio-4f4da62347207f1de72dec5cffe2ef94c4bae8349e69c295e5545e6f6c65f4b9 WatchSource:0}: Error finding container 4f4da62347207f1de72dec5cffe2ef94c4bae8349e69c295e5545e6f6c65f4b9: Status 404 returned error can't find the container with id 4f4da62347207f1de72dec5cffe2ef94c4bae8349e69c295e5545e6f6c65f4b9 Nov 22 00:08:31 crc kubenswrapper[4928]: I1122 00:08:31.040393 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-rz49f" Nov 22 00:08:31 crc kubenswrapper[4928]: I1122 00:08:31.042808 4928 patch_prober.go:28] interesting pod/router-default-5444994796-rz49f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 00:08:31 crc kubenswrapper[4928]: [-]has-synced failed: reason withheld Nov 22 00:08:31 crc kubenswrapper[4928]: [+]process-running ok Nov 22 00:08:31 crc kubenswrapper[4928]: healthz check failed Nov 22 00:08:31 crc kubenswrapper[4928]: I1122 00:08:31.042884 4928 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rz49f" podUID="4765c779-803b-42f9-9598-c91eec1b5e9e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 00:08:31 crc kubenswrapper[4928]: I1122 00:08:31.070997 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb-utilities\") pod \"redhat-operators-dgmwk\" (UID: \"9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb\") " 
pod="openshift-marketplace/redhat-operators-dgmwk" Nov 22 00:08:31 crc kubenswrapper[4928]: I1122 00:08:31.071045 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb-catalog-content\") pod \"redhat-operators-dgmwk\" (UID: \"9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb\") " pod="openshift-marketplace/redhat-operators-dgmwk" Nov 22 00:08:31 crc kubenswrapper[4928]: I1122 00:08:31.071075 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grmbq\" (UniqueName: \"kubernetes.io/projected/9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb-kube-api-access-grmbq\") pod \"redhat-operators-dgmwk\" (UID: \"9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb\") " pod="openshift-marketplace/redhat-operators-dgmwk" Nov 22 00:08:31 crc kubenswrapper[4928]: I1122 00:08:31.071821 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb-utilities\") pod \"redhat-operators-dgmwk\" (UID: \"9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb\") " pod="openshift-marketplace/redhat-operators-dgmwk" Nov 22 00:08:31 crc kubenswrapper[4928]: I1122 00:08:31.071772 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb-catalog-content\") pod \"redhat-operators-dgmwk\" (UID: \"9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb\") " pod="openshift-marketplace/redhat-operators-dgmwk" Nov 22 00:08:31 crc kubenswrapper[4928]: I1122 00:08:31.095379 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grmbq\" (UniqueName: \"kubernetes.io/projected/9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb-kube-api-access-grmbq\") pod \"redhat-operators-dgmwk\" (UID: \"9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb\") " pod="openshift-marketplace/redhat-operators-dgmwk" Nov 
22 00:08:31 crc kubenswrapper[4928]: I1122 00:08:31.148229 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dgmwk" Nov 22 00:08:31 crc kubenswrapper[4928]: I1122 00:08:31.225685 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-hnm59" Nov 22 00:08:31 crc kubenswrapper[4928]: I1122 00:08:31.231339 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-54l8n" Nov 22 00:08:31 crc kubenswrapper[4928]: I1122 00:08:31.292655 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 22 00:08:31 crc kubenswrapper[4928]: I1122 00:08:31.456676 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hhl2z"] Nov 22 00:08:31 crc kubenswrapper[4928]: W1122 00:08:31.533204 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0703e2f_994d_4ba3_8461_1c90ae1a063d.slice/crio-61e00fed36745915a14f8030df59f301cb196b10ed78f54dc4920b69126d3c65 WatchSource:0}: Error finding container 61e00fed36745915a14f8030df59f301cb196b10ed78f54dc4920b69126d3c65: Status 404 returned error can't find the container with id 61e00fed36745915a14f8030df59f301cb196b10ed78f54dc4920b69126d3c65 Nov 22 00:08:31 crc kubenswrapper[4928]: I1122 00:08:31.590420 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dgmwk"] Nov 22 00:08:31 crc kubenswrapper[4928]: I1122 00:08:31.697641 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"05e120bf-63dd-4299-b548-9c0afc78bc65","Type":"ContainerStarted","Data":"f14b5fc07c09844a1b81a3c071a1162b35ec54940412cd5b6b6d2112e6aff6c0"} Nov 22 00:08:31 crc 
kubenswrapper[4928]: I1122 00:08:31.703990 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhl2z" event={"ID":"f0703e2f-994d-4ba3-8461-1c90ae1a063d","Type":"ContainerStarted","Data":"61e00fed36745915a14f8030df59f301cb196b10ed78f54dc4920b69126d3c65"} Nov 22 00:08:31 crc kubenswrapper[4928]: I1122 00:08:31.727429 4928 generic.go:334] "Generic (PLEG): container finished" podID="4f09254e-9e39-40fb-a69e-42d933b4cf7e" containerID="168bdce5d402fcd2d01a837bb879d98a9c7421c88f271d2f15a014e715d03fb1" exitCode=0 Nov 22 00:08:31 crc kubenswrapper[4928]: I1122 00:08:31.728643 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btw4s" event={"ID":"4f09254e-9e39-40fb-a69e-42d933b4cf7e","Type":"ContainerDied","Data":"168bdce5d402fcd2d01a837bb879d98a9c7421c88f271d2f15a014e715d03fb1"} Nov 22 00:08:31 crc kubenswrapper[4928]: I1122 00:08:31.728686 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btw4s" event={"ID":"4f09254e-9e39-40fb-a69e-42d933b4cf7e","Type":"ContainerStarted","Data":"4f4da62347207f1de72dec5cffe2ef94c4bae8349e69c295e5545e6f6c65f4b9"} Nov 22 00:08:31 crc kubenswrapper[4928]: I1122 00:08:31.747152 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgmwk" event={"ID":"9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb","Type":"ContainerStarted","Data":"47f20cc533869313fd6d5517bffa6a783377997ced2e2a82a867ec2673f4c9f0"} Nov 22 00:08:31 crc kubenswrapper[4928]: I1122 00:08:31.747535 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:31 crc kubenswrapper[4928]: I1122 00:08:31.759920 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-kxr5c" Nov 22 00:08:32 crc kubenswrapper[4928]: I1122 00:08:32.109662 4928 patch_prober.go:28] 
interesting pod/router-default-5444994796-rz49f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 00:08:32 crc kubenswrapper[4928]: [-]has-synced failed: reason withheld Nov 22 00:08:32 crc kubenswrapper[4928]: [+]process-running ok Nov 22 00:08:32 crc kubenswrapper[4928]: healthz check failed Nov 22 00:08:32 crc kubenswrapper[4928]: I1122 00:08:32.109897 4928 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rz49f" podUID="4765c779-803b-42f9-9598-c91eec1b5e9e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 00:08:32 crc kubenswrapper[4928]: I1122 00:08:32.292611 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 00:08:32 crc kubenswrapper[4928]: I1122 00:08:32.412250 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/760e859b-a35f-4663-8c09-5a1225237ecd-kube-api-access\") pod \"760e859b-a35f-4663-8c09-5a1225237ecd\" (UID: \"760e859b-a35f-4663-8c09-5a1225237ecd\") " Nov 22 00:08:32 crc kubenswrapper[4928]: I1122 00:08:32.412306 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/760e859b-a35f-4663-8c09-5a1225237ecd-kubelet-dir\") pod \"760e859b-a35f-4663-8c09-5a1225237ecd\" (UID: \"760e859b-a35f-4663-8c09-5a1225237ecd\") " Nov 22 00:08:32 crc kubenswrapper[4928]: I1122 00:08:32.412738 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/760e859b-a35f-4663-8c09-5a1225237ecd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "760e859b-a35f-4663-8c09-5a1225237ecd" (UID: "760e859b-a35f-4663-8c09-5a1225237ecd"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:08:32 crc kubenswrapper[4928]: I1122 00:08:32.427250 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/760e859b-a35f-4663-8c09-5a1225237ecd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "760e859b-a35f-4663-8c09-5a1225237ecd" (UID: "760e859b-a35f-4663-8c09-5a1225237ecd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:08:32 crc kubenswrapper[4928]: I1122 00:08:32.514490 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/760e859b-a35f-4663-8c09-5a1225237ecd-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 22 00:08:32 crc kubenswrapper[4928]: I1122 00:08:32.514527 4928 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/760e859b-a35f-4663-8c09-5a1225237ecd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 22 00:08:32 crc kubenswrapper[4928]: I1122 00:08:32.761409 4928 generic.go:334] "Generic (PLEG): container finished" podID="f0703e2f-994d-4ba3-8461-1c90ae1a063d" containerID="c8bbb8cc73f7aff26b610e8379985896d95326d52a2d87dc526fb5ca65b4f0a8" exitCode=0 Nov 22 00:08:32 crc kubenswrapper[4928]: I1122 00:08:32.761580 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhl2z" event={"ID":"f0703e2f-994d-4ba3-8461-1c90ae1a063d","Type":"ContainerDied","Data":"c8bbb8cc73f7aff26b610e8379985896d95326d52a2d87dc526fb5ca65b4f0a8"} Nov 22 00:08:32 crc kubenswrapper[4928]: I1122 00:08:32.768346 4928 generic.go:334] "Generic (PLEG): container finished" podID="9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb" containerID="bcb8c451a9d7b01d32d5963965c526bfe01a837c0c946aa33c5fe2abc94c1116" exitCode=0 Nov 22 00:08:32 crc kubenswrapper[4928]: I1122 00:08:32.768411 4928 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgmwk" event={"ID":"9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb","Type":"ContainerDied","Data":"bcb8c451a9d7b01d32d5963965c526bfe01a837c0c946aa33c5fe2abc94c1116"} Nov 22 00:08:32 crc kubenswrapper[4928]: I1122 00:08:32.792204 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"05e120bf-63dd-4299-b548-9c0afc78bc65","Type":"ContainerStarted","Data":"26b5f211bb6effa7542a270fc3f5df0cb723e6c2dafbad0084c63ff654415936"} Nov 22 00:08:32 crc kubenswrapper[4928]: I1122 00:08:32.796709 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 00:08:32 crc kubenswrapper[4928]: I1122 00:08:32.796699 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"760e859b-a35f-4663-8c09-5a1225237ecd","Type":"ContainerDied","Data":"bfeadc8c964704f7f9713b70b0aa1f8860bda93115bca4fb07eebb8f6dfd9ace"} Nov 22 00:08:32 crc kubenswrapper[4928]: I1122 00:08:32.796775 4928 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfeadc8c964704f7f9713b70b0aa1f8860bda93115bca4fb07eebb8f6dfd9ace" Nov 22 00:08:32 crc kubenswrapper[4928]: I1122 00:08:32.841157 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.841135521 podStartE2EDuration="2.841135521s" podCreationTimestamp="2025-11-22 00:08:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:08:32.84044963 +0000 UTC m=+148.128480831" watchObservedRunningTime="2025-11-22 00:08:32.841135521 +0000 UTC m=+148.129166732" Nov 22 00:08:33 crc kubenswrapper[4928]: I1122 00:08:33.040399 4928 patch_prober.go:28] interesting 
pod/router-default-5444994796-rz49f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 00:08:33 crc kubenswrapper[4928]: [-]has-synced failed: reason withheld Nov 22 00:08:33 crc kubenswrapper[4928]: [+]process-running ok Nov 22 00:08:33 crc kubenswrapper[4928]: healthz check failed Nov 22 00:08:33 crc kubenswrapper[4928]: I1122 00:08:33.040457 4928 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rz49f" podUID="4765c779-803b-42f9-9598-c91eec1b5e9e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 00:08:33 crc kubenswrapper[4928]: I1122 00:08:33.429873 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:08:33 crc kubenswrapper[4928]: I1122 00:08:33.430599 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:08:33 crc kubenswrapper[4928]: I1122 00:08:33.431120 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:08:33 crc kubenswrapper[4928]: I1122 00:08:33.457100 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:08:33 crc kubenswrapper[4928]: I1122 00:08:33.533117 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:08:33 crc kubenswrapper[4928]: I1122 00:08:33.537103 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:08:33 crc kubenswrapper[4928]: I1122 00:08:33.545318 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 00:08:33 crc kubenswrapper[4928]: I1122 00:08:33.551578 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:08:33 crc kubenswrapper[4928]: I1122 00:08:33.635421 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:08:33 crc kubenswrapper[4928]: I1122 00:08:33.642845 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:08:33 crc kubenswrapper[4928]: I1122 00:08:33.835821 4928 generic.go:334] "Generic (PLEG): container finished" podID="05e120bf-63dd-4299-b548-9c0afc78bc65" containerID="26b5f211bb6effa7542a270fc3f5df0cb723e6c2dafbad0084c63ff654415936" exitCode=0 Nov 22 00:08:33 crc kubenswrapper[4928]: I1122 00:08:33.835962 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"05e120bf-63dd-4299-b548-9c0afc78bc65","Type":"ContainerDied","Data":"26b5f211bb6effa7542a270fc3f5df0cb723e6c2dafbad0084c63ff654415936"} Nov 22 00:08:33 crc kubenswrapper[4928]: I1122 00:08:33.862928 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 00:08:34 crc kubenswrapper[4928]: I1122 00:08:34.041125 4928 patch_prober.go:28] interesting pod/router-default-5444994796-rz49f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 00:08:34 crc kubenswrapper[4928]: [-]has-synced failed: reason withheld Nov 22 00:08:34 crc kubenswrapper[4928]: [+]process-running ok Nov 22 00:08:34 crc kubenswrapper[4928]: healthz check failed Nov 22 00:08:34 crc kubenswrapper[4928]: I1122 00:08:34.041195 4928 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rz49f" podUID="4765c779-803b-42f9-9598-c91eec1b5e9e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 00:08:34 crc kubenswrapper[4928]: W1122 00:08:34.431008 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-34ff671f53aca1727a5b3ad7f0ed8dd5cd72e959a4c0db14e62331d31454b4d5 WatchSource:0}: Error finding container 34ff671f53aca1727a5b3ad7f0ed8dd5cd72e959a4c0db14e62331d31454b4d5: Status 404 returned error can't find the container with id 34ff671f53aca1727a5b3ad7f0ed8dd5cd72e959a4c0db14e62331d31454b4d5 Nov 22 00:08:34 crc kubenswrapper[4928]: I1122 00:08:34.871175 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"969a1b42715b725f5a4bc921c9403bc09ec6bbe507bff906389a22467287ec39"} Nov 22 00:08:34 crc kubenswrapper[4928]: I1122 00:08:34.875206 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9d7156e1b994a41f3a560151923fafa9029b4a82b3009d2ab9c079dd15ce2df0"} Nov 22 00:08:34 crc kubenswrapper[4928]: I1122 00:08:34.888926 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"34ff671f53aca1727a5b3ad7f0ed8dd5cd72e959a4c0db14e62331d31454b4d5"} Nov 22 00:08:35 crc kubenswrapper[4928]: I1122 00:08:35.042023 4928 patch_prober.go:28] interesting pod/router-default-5444994796-rz49f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 00:08:35 crc kubenswrapper[4928]: [-]has-synced failed: reason withheld Nov 22 00:08:35 crc kubenswrapper[4928]: [+]process-running ok Nov 22 00:08:35 crc kubenswrapper[4928]: healthz check failed Nov 22 00:08:35 crc kubenswrapper[4928]: I1122 00:08:35.042346 4928 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rz49f" podUID="4765c779-803b-42f9-9598-c91eec1b5e9e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 00:08:35 crc kubenswrapper[4928]: I1122 00:08:35.397952 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 00:08:35 crc kubenswrapper[4928]: I1122 00:08:35.579239 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05e120bf-63dd-4299-b548-9c0afc78bc65-kube-api-access\") pod \"05e120bf-63dd-4299-b548-9c0afc78bc65\" (UID: \"05e120bf-63dd-4299-b548-9c0afc78bc65\") " Nov 22 00:08:35 crc kubenswrapper[4928]: I1122 00:08:35.579278 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05e120bf-63dd-4299-b548-9c0afc78bc65-kubelet-dir\") pod \"05e120bf-63dd-4299-b548-9c0afc78bc65\" (UID: \"05e120bf-63dd-4299-b548-9c0afc78bc65\") " Nov 22 00:08:35 crc kubenswrapper[4928]: I1122 00:08:35.579442 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05e120bf-63dd-4299-b548-9c0afc78bc65-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "05e120bf-63dd-4299-b548-9c0afc78bc65" (UID: "05e120bf-63dd-4299-b548-9c0afc78bc65"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:08:35 crc kubenswrapper[4928]: I1122 00:08:35.579953 4928 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05e120bf-63dd-4299-b548-9c0afc78bc65-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 22 00:08:35 crc kubenswrapper[4928]: I1122 00:08:35.607748 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05e120bf-63dd-4299-b548-9c0afc78bc65-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "05e120bf-63dd-4299-b548-9c0afc78bc65" (UID: "05e120bf-63dd-4299-b548-9c0afc78bc65"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:08:35 crc kubenswrapper[4928]: I1122 00:08:35.685985 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05e120bf-63dd-4299-b548-9c0afc78bc65-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 22 00:08:35 crc kubenswrapper[4928]: I1122 00:08:35.906257 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0b967bc42af776e599661fef6aacd9a019b2365fe7f70050489248bf6717d7f3"} Nov 22 00:08:35 crc kubenswrapper[4928]: I1122 00:08:35.906563 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:08:35 crc kubenswrapper[4928]: I1122 00:08:35.909503 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 00:08:35 crc kubenswrapper[4928]: I1122 00:08:35.909502 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"05e120bf-63dd-4299-b548-9c0afc78bc65","Type":"ContainerDied","Data":"f14b5fc07c09844a1b81a3c071a1162b35ec54940412cd5b6b6d2112e6aff6c0"} Nov 22 00:08:35 crc kubenswrapper[4928]: I1122 00:08:35.909613 4928 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f14b5fc07c09844a1b81a3c071a1162b35ec54940412cd5b6b6d2112e6aff6c0" Nov 22 00:08:35 crc kubenswrapper[4928]: I1122 00:08:35.917240 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"864ff9d0a2441091b9286e75c9095aa61e181f02b24239a7f0157083bb3e26bf"} Nov 22 00:08:35 crc kubenswrapper[4928]: I1122 00:08:35.932276 
4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c94be9c18f1cf9de4f002d371876e0bd491387b4a570e507791c099b51647096"} Nov 22 00:08:35 crc kubenswrapper[4928]: I1122 00:08:35.955361 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-dqtnp" Nov 22 00:08:36 crc kubenswrapper[4928]: I1122 00:08:36.046323 4928 patch_prober.go:28] interesting pod/router-default-5444994796-rz49f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 00:08:36 crc kubenswrapper[4928]: [-]has-synced failed: reason withheld Nov 22 00:08:36 crc kubenswrapper[4928]: [+]process-running ok Nov 22 00:08:36 crc kubenswrapper[4928]: healthz check failed Nov 22 00:08:36 crc kubenswrapper[4928]: I1122 00:08:36.046382 4928 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rz49f" podUID="4765c779-803b-42f9-9598-c91eec1b5e9e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 00:08:36 crc kubenswrapper[4928]: I1122 00:08:36.708096 4928 patch_prober.go:28] interesting pod/machine-config-daemon-hp7cp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 00:08:36 crc kubenswrapper[4928]: I1122 00:08:36.708233 4928 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Nov 22 00:08:37 crc kubenswrapper[4928]: I1122 00:08:37.043034 4928 patch_prober.go:28] interesting pod/router-default-5444994796-rz49f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 00:08:37 crc kubenswrapper[4928]: [-]has-synced failed: reason withheld Nov 22 00:08:37 crc kubenswrapper[4928]: [+]process-running ok Nov 22 00:08:37 crc kubenswrapper[4928]: healthz check failed Nov 22 00:08:37 crc kubenswrapper[4928]: I1122 00:08:37.043101 4928 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rz49f" podUID="4765c779-803b-42f9-9598-c91eec1b5e9e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 00:08:38 crc kubenswrapper[4928]: I1122 00:08:38.041464 4928 patch_prober.go:28] interesting pod/router-default-5444994796-rz49f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 00:08:38 crc kubenswrapper[4928]: [-]has-synced failed: reason withheld Nov 22 00:08:38 crc kubenswrapper[4928]: [+]process-running ok Nov 22 00:08:38 crc kubenswrapper[4928]: healthz check failed Nov 22 00:08:38 crc kubenswrapper[4928]: I1122 00:08:38.042111 4928 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rz49f" podUID="4765c779-803b-42f9-9598-c91eec1b5e9e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 00:08:39 crc kubenswrapper[4928]: I1122 00:08:39.040968 4928 patch_prober.go:28] interesting pod/router-default-5444994796-rz49f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Nov 22 00:08:39 crc kubenswrapper[4928]: [+]has-synced ok Nov 22 00:08:39 crc kubenswrapper[4928]: [+]process-running ok Nov 22 00:08:39 crc kubenswrapper[4928]: healthz check failed Nov 22 00:08:39 crc kubenswrapper[4928]: I1122 00:08:39.041039 4928 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rz49f" podUID="4765c779-803b-42f9-9598-c91eec1b5e9e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 00:08:40 crc kubenswrapper[4928]: I1122 00:08:40.047561 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-rz49f" Nov 22 00:08:40 crc kubenswrapper[4928]: I1122 00:08:40.051061 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-rz49f" Nov 22 00:08:40 crc kubenswrapper[4928]: I1122 00:08:40.654552 4928 patch_prober.go:28] interesting pod/console-f9d7485db-fkl9p container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Nov 22 00:08:40 crc kubenswrapper[4928]: I1122 00:08:40.654623 4928 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fkl9p" podUID="a5945d14-c3fa-42ae-91c3-64fe566c1b20" containerName="console" probeResult="failure" output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" Nov 22 00:08:40 crc kubenswrapper[4928]: I1122 00:08:40.670482 4928 patch_prober.go:28] interesting pod/downloads-7954f5f757-bttq9 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Nov 22 00:08:40 crc kubenswrapper[4928]: I1122 00:08:40.670521 4928 
patch_prober.go:28] interesting pod/downloads-7954f5f757-bttq9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Nov 22 00:08:40 crc kubenswrapper[4928]: I1122 00:08:40.670589 4928 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bttq9" podUID="a49e097b-7213-4398-800a-02a39ebc297e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Nov 22 00:08:40 crc kubenswrapper[4928]: I1122 00:08:40.670615 4928 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bttq9" podUID="a49e097b-7213-4398-800a-02a39ebc297e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Nov 22 00:08:48 crc kubenswrapper[4928]: I1122 00:08:48.405128 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3aa30f0-2a7f-4b60-b254-3e894a581f89-metrics-certs\") pod \"network-metrics-daemon-kx6zl\" (UID: \"b3aa30f0-2a7f-4b60-b254-3e894a581f89\") " pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:08:48 crc kubenswrapper[4928]: I1122 00:08:48.422485 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3aa30f0-2a7f-4b60-b254-3e894a581f89-metrics-certs\") pod \"network-metrics-daemon-kx6zl\" (UID: \"b3aa30f0-2a7f-4b60-b254-3e894a581f89\") " pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:08:48 crc kubenswrapper[4928]: I1122 00:08:48.570931 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kx6zl" Nov 22 00:08:49 crc kubenswrapper[4928]: I1122 00:08:49.343383 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:08:50 crc kubenswrapper[4928]: I1122 00:08:50.668233 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-fkl9p" Nov 22 00:08:50 crc kubenswrapper[4928]: I1122 00:08:50.673231 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-fkl9p" Nov 22 00:08:50 crc kubenswrapper[4928]: I1122 00:08:50.711989 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-bttq9" Nov 22 00:08:58 crc kubenswrapper[4928]: I1122 00:08:58.092546 4928 generic.go:334] "Generic (PLEG): container finished" podID="c10b9e4e-381b-425e-a1ce-3615a7580960" containerID="7a51a217fb7fb154e2bd61e8799b2d7026a166cf6ed8c8a4210bc544654cd45a" exitCode=0 Nov 22 00:08:58 crc kubenswrapper[4928]: I1122 00:08:58.092649 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29396160-qh9g4" event={"ID":"c10b9e4e-381b-425e-a1ce-3615a7580960","Type":"ContainerDied","Data":"7a51a217fb7fb154e2bd61e8799b2d7026a166cf6ed8c8a4210bc544654cd45a"} Nov 22 00:09:00 crc kubenswrapper[4928]: I1122 00:09:00.846667 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chsjj" Nov 22 00:09:04 crc kubenswrapper[4928]: E1122 00:09:04.655847 4928 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 22 00:09:04 crc kubenswrapper[4928]: E1122 00:09:04.656419 
4928 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d9tqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-btw4s_openshift-marketplace(4f09254e-9e39-40fb-a69e-42d933b4cf7e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 00:09:04 crc kubenswrapper[4928]: E1122 00:09:04.657687 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-btw4s" podUID="4f09254e-9e39-40fb-a69e-42d933b4cf7e" Nov 22 00:09:06 crc kubenswrapper[4928]: I1122 00:09:06.708184 4928 patch_prober.go:28] interesting pod/machine-config-daemon-hp7cp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 00:09:06 crc kubenswrapper[4928]: I1122 00:09:06.708469 4928 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 00:09:08 crc kubenswrapper[4928]: E1122 00:09:08.442740 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-btw4s" podUID="4f09254e-9e39-40fb-a69e-42d933b4cf7e" Nov 22 00:09:11 crc kubenswrapper[4928]: E1122 00:09:11.512260 4928 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 22 00:09:11 crc kubenswrapper[4928]: E1122 00:09:11.512745 4928 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6gcs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-wnvn4_openshift-marketplace(b3d779fd-e36a-44a9-93d2-266fc054b0f9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 00:09:11 crc kubenswrapper[4928]: E1122 00:09:11.513907 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-wnvn4" podUID="b3d779fd-e36a-44a9-93d2-266fc054b0f9" Nov 22 00:09:13 crc kubenswrapper[4928]: I1122 00:09:13.558441 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 00:09:18 crc kubenswrapper[4928]: E1122 00:09:18.110923 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-wnvn4" podUID="b3d779fd-e36a-44a9-93d2-266fc054b0f9" Nov 22 00:09:22 crc kubenswrapper[4928]: E1122 00:09:22.321344 4928 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 22 00:09:22 crc kubenswrapper[4928]: E1122 00:09:22.321751 4928 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dq4ks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-hhl2z_openshift-marketplace(f0703e2f-994d-4ba3-8461-1c90ae1a063d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 00:09:22 crc kubenswrapper[4928]: E1122 00:09:22.323462 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-hhl2z" podUID="f0703e2f-994d-4ba3-8461-1c90ae1a063d" Nov 22 00:09:24 crc 
kubenswrapper[4928]: E1122 00:09:24.182484 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hhl2z" podUID="f0703e2f-994d-4ba3-8461-1c90ae1a063d" Nov 22 00:09:24 crc kubenswrapper[4928]: I1122 00:09:24.239144 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29396160-qh9g4" event={"ID":"c10b9e4e-381b-425e-a1ce-3615a7580960","Type":"ContainerDied","Data":"d251d19ad3e9dab5e2061af0286e2f22f26c6b501796d96dd9b32584e2395a3d"} Nov 22 00:09:24 crc kubenswrapper[4928]: I1122 00:09:24.239445 4928 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d251d19ad3e9dab5e2061af0286e2f22f26c6b501796d96dd9b32584e2395a3d" Nov 22 00:09:24 crc kubenswrapper[4928]: I1122 00:09:24.249535 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29396160-qh9g4" Nov 22 00:09:24 crc kubenswrapper[4928]: I1122 00:09:24.383341 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c10b9e4e-381b-425e-a1ce-3615a7580960-serviceca\") pod \"c10b9e4e-381b-425e-a1ce-3615a7580960\" (UID: \"c10b9e4e-381b-425e-a1ce-3615a7580960\") " Nov 22 00:09:24 crc kubenswrapper[4928]: I1122 00:09:24.383942 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw8ph\" (UniqueName: \"kubernetes.io/projected/c10b9e4e-381b-425e-a1ce-3615a7580960-kube-api-access-kw8ph\") pod \"c10b9e4e-381b-425e-a1ce-3615a7580960\" (UID: \"c10b9e4e-381b-425e-a1ce-3615a7580960\") " Nov 22 00:09:24 crc kubenswrapper[4928]: I1122 00:09:24.384186 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c10b9e4e-381b-425e-a1ce-3615a7580960-serviceca" (OuterVolumeSpecName: "serviceca") pod "c10b9e4e-381b-425e-a1ce-3615a7580960" (UID: "c10b9e4e-381b-425e-a1ce-3615a7580960"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:09:24 crc kubenswrapper[4928]: I1122 00:09:24.390686 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c10b9e4e-381b-425e-a1ce-3615a7580960-kube-api-access-kw8ph" (OuterVolumeSpecName: "kube-api-access-kw8ph") pod "c10b9e4e-381b-425e-a1ce-3615a7580960" (UID: "c10b9e4e-381b-425e-a1ce-3615a7580960"). InnerVolumeSpecName "kube-api-access-kw8ph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:09:24 crc kubenswrapper[4928]: E1122 00:09:24.415891 4928 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 22 00:09:24 crc kubenswrapper[4928]: E1122 00:09:24.416090 4928 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hz2vc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},
RestartPolicy:nil,} start failed in pod community-operators-rrw57_openshift-marketplace(09021957-55db-401c-a705-d3d308653cc9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 00:09:24 crc kubenswrapper[4928]: E1122 00:09:24.417467 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-rrw57" podUID="09021957-55db-401c-a705-d3d308653cc9" Nov 22 00:09:24 crc kubenswrapper[4928]: W1122 00:09:24.438958 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3aa30f0_2a7f_4b60_b254_3e894a581f89.slice/crio-b1a4e1dbc96af2c7ca729353c8ba4b63426c7af23401010ce7cb45814c158fb6 WatchSource:0}: Error finding container b1a4e1dbc96af2c7ca729353c8ba4b63426c7af23401010ce7cb45814c158fb6: Status 404 returned error can't find the container with id b1a4e1dbc96af2c7ca729353c8ba4b63426c7af23401010ce7cb45814c158fb6 Nov 22 00:09:24 crc kubenswrapper[4928]: I1122 00:09:24.440738 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kx6zl"] Nov 22 00:09:24 crc kubenswrapper[4928]: I1122 00:09:24.485760 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw8ph\" (UniqueName: \"kubernetes.io/projected/c10b9e4e-381b-425e-a1ce-3615a7580960-kube-api-access-kw8ph\") on node \"crc\" DevicePath \"\"" Nov 22 00:09:24 crc kubenswrapper[4928]: I1122 00:09:24.485849 4928 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c10b9e4e-381b-425e-a1ce-3615a7580960-serviceca\") on node \"crc\" DevicePath \"\"" Nov 22 00:09:24 crc kubenswrapper[4928]: E1122 00:09:24.862176 
4928 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 22 00:09:24 crc kubenswrapper[4928]: E1122 00:09:24.862755 4928 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rcbsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-hd5kl_openshift-marketplace(0caf4878-137a-405f-9173-3187372a5135): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 00:09:24 crc kubenswrapper[4928]: E1122 00:09:24.864090 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-hd5kl" podUID="0caf4878-137a-405f-9173-3187372a5135" Nov 22 00:09:25 crc kubenswrapper[4928]: E1122 00:09:25.114069 4928 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 22 00:09:25 crc kubenswrapper[4928]: E1122 00:09:25.114228 4928 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wkqvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-7vbgf_openshift-marketplace(61f75f23-c9ca-453e-a2e4-6d78939022db): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 00:09:25 crc kubenswrapper[4928]: E1122 00:09:25.115394 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-7vbgf" podUID="61f75f23-c9ca-453e-a2e4-6d78939022db" Nov 22 00:09:25 crc 
kubenswrapper[4928]: I1122 00:09:25.246473 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kx6zl" event={"ID":"b3aa30f0-2a7f-4b60-b254-3e894a581f89","Type":"ContainerStarted","Data":"0aba76ecbc0ad7b13239cf356caa8cbb7e8e7ea961eea94c4e39abdd5fdbba6a"} Nov 22 00:09:25 crc kubenswrapper[4928]: I1122 00:09:25.246572 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kx6zl" event={"ID":"b3aa30f0-2a7f-4b60-b254-3e894a581f89","Type":"ContainerStarted","Data":"58171a90d70f26fa8f71dfefb1a0dfcc047ed523777718b2e075340aa70304d5"} Nov 22 00:09:25 crc kubenswrapper[4928]: I1122 00:09:25.246587 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kx6zl" event={"ID":"b3aa30f0-2a7f-4b60-b254-3e894a581f89","Type":"ContainerStarted","Data":"b1a4e1dbc96af2c7ca729353c8ba4b63426c7af23401010ce7cb45814c158fb6"} Nov 22 00:09:25 crc kubenswrapper[4928]: I1122 00:09:25.246525 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29396160-qh9g4" Nov 22 00:09:25 crc kubenswrapper[4928]: E1122 00:09:25.252999 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-7vbgf" podUID="61f75f23-c9ca-453e-a2e4-6d78939022db" Nov 22 00:09:25 crc kubenswrapper[4928]: E1122 00:09:25.253090 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-hd5kl" podUID="0caf4878-137a-405f-9173-3187372a5135" Nov 22 00:09:25 crc kubenswrapper[4928]: E1122 00:09:25.253132 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rrw57" podUID="09021957-55db-401c-a705-d3d308653cc9" Nov 22 00:09:25 crc kubenswrapper[4928]: E1122 00:09:25.341610 4928 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 22 00:09:25 crc kubenswrapper[4928]: E1122 00:09:25.341822 4928 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-grmbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dgmwk_openshift-marketplace(9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 00:09:25 crc kubenswrapper[4928]: E1122 00:09:25.343037 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-dgmwk" podUID="9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb" Nov 22 00:09:25 crc 
kubenswrapper[4928]: E1122 00:09:25.513407 4928 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 22 00:09:25 crc kubenswrapper[4928]: E1122 00:09:25.513712 4928 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-llnph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-rscpw_openshift-marketplace(8b9405dc-ca44-4909-a7ca-499354e67172): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 00:09:25 crc kubenswrapper[4928]: E1122 00:09:25.515659 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-rscpw" podUID="8b9405dc-ca44-4909-a7ca-499354e67172" Nov 22 00:09:26 crc kubenswrapper[4928]: E1122 00:09:26.253909 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rscpw" podUID="8b9405dc-ca44-4909-a7ca-499354e67172" Nov 22 00:09:26 crc kubenswrapper[4928]: E1122 00:09:26.254256 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-dgmwk" podUID="9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb" Nov 22 00:09:26 crc kubenswrapper[4928]: I1122 00:09:26.311764 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-kx6zl" podStartSLOduration=181.311741842 podStartE2EDuration="3m1.311741842s" podCreationTimestamp="2025-11-22 00:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:09:26.308355989 +0000 UTC m=+201.596387180" watchObservedRunningTime="2025-11-22 00:09:26.311741842 +0000 UTC 
m=+201.599773033" Nov 22 00:09:28 crc kubenswrapper[4928]: I1122 00:09:28.265004 4928 generic.go:334] "Generic (PLEG): container finished" podID="4f09254e-9e39-40fb-a69e-42d933b4cf7e" containerID="af14876971f37263f4ab6ce97d1216dca47a562a075f40ce0b7a9d80dac16088" exitCode=0 Nov 22 00:09:28 crc kubenswrapper[4928]: I1122 00:09:28.265101 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btw4s" event={"ID":"4f09254e-9e39-40fb-a69e-42d933b4cf7e","Type":"ContainerDied","Data":"af14876971f37263f4ab6ce97d1216dca47a562a075f40ce0b7a9d80dac16088"} Nov 22 00:09:30 crc kubenswrapper[4928]: I1122 00:09:30.277094 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btw4s" event={"ID":"4f09254e-9e39-40fb-a69e-42d933b4cf7e","Type":"ContainerStarted","Data":"d135b9da67490e443227c0909214290d9eb4f161b61ca50b19ebedb0ee5951f2"} Nov 22 00:09:30 crc kubenswrapper[4928]: I1122 00:09:30.299151 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-btw4s" podStartSLOduration=2.8962861589999997 podStartE2EDuration="1m0.299133698s" podCreationTimestamp="2025-11-22 00:08:30 +0000 UTC" firstStartedPulling="2025-11-22 00:08:31.729167215 +0000 UTC m=+147.017198406" lastFinishedPulling="2025-11-22 00:09:29.132014754 +0000 UTC m=+204.420045945" observedRunningTime="2025-11-22 00:09:30.296349433 +0000 UTC m=+205.584380624" watchObservedRunningTime="2025-11-22 00:09:30.299133698 +0000 UTC m=+205.587164879" Nov 22 00:09:30 crc kubenswrapper[4928]: I1122 00:09:30.527210 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-btw4s" Nov 22 00:09:30 crc kubenswrapper[4928]: I1122 00:09:30.527266 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-btw4s" Nov 22 00:09:31 crc kubenswrapper[4928]: I1122 00:09:31.712331 
4928 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-btw4s" podUID="4f09254e-9e39-40fb-a69e-42d933b4cf7e" containerName="registry-server" probeResult="failure" output=< Nov 22 00:09:31 crc kubenswrapper[4928]: timeout: failed to connect service ":50051" within 1s Nov 22 00:09:31 crc kubenswrapper[4928]: > Nov 22 00:09:36 crc kubenswrapper[4928]: I1122 00:09:36.316866 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wnvn4" event={"ID":"b3d779fd-e36a-44a9-93d2-266fc054b0f9","Type":"ContainerDied","Data":"d18cfa6300799d05b4a29e7242e0922cef302467cd1374b2cfa30f4df18ddf69"} Nov 22 00:09:36 crc kubenswrapper[4928]: I1122 00:09:36.316892 4928 generic.go:334] "Generic (PLEG): container finished" podID="b3d779fd-e36a-44a9-93d2-266fc054b0f9" containerID="d18cfa6300799d05b4a29e7242e0922cef302467cd1374b2cfa30f4df18ddf69" exitCode=0 Nov 22 00:09:36 crc kubenswrapper[4928]: I1122 00:09:36.707665 4928 patch_prober.go:28] interesting pod/machine-config-daemon-hp7cp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 00:09:36 crc kubenswrapper[4928]: I1122 00:09:36.708048 4928 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 00:09:36 crc kubenswrapper[4928]: I1122 00:09:36.708106 4928 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" Nov 22 00:09:36 crc kubenswrapper[4928]: I1122 00:09:36.708771 4928 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cf4923905a54d888d652ff50f477aa1487106dd4860c1c35b374fce196d89a70"} pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 00:09:36 crc kubenswrapper[4928]: I1122 00:09:36.708924 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" containerID="cri-o://cf4923905a54d888d652ff50f477aa1487106dd4860c1c35b374fce196d89a70" gracePeriod=600 Nov 22 00:09:37 crc kubenswrapper[4928]: I1122 00:09:37.325084 4928 generic.go:334] "Generic (PLEG): container finished" podID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerID="cf4923905a54d888d652ff50f477aa1487106dd4860c1c35b374fce196d89a70" exitCode=0 Nov 22 00:09:37 crc kubenswrapper[4928]: I1122 00:09:37.325135 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" event={"ID":"897ce64d-0600-4c6a-b684-0c1683fa14bc","Type":"ContainerDied","Data":"cf4923905a54d888d652ff50f477aa1487106dd4860c1c35b374fce196d89a70"} Nov 22 00:09:38 crc kubenswrapper[4928]: I1122 00:09:38.335453 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" event={"ID":"897ce64d-0600-4c6a-b684-0c1683fa14bc","Type":"ContainerStarted","Data":"89b6744e44d745238ba09799ad0c4c13a9b38fe687a1383d25bf8855e3b2d8b7"} Nov 22 00:09:40 crc kubenswrapper[4928]: I1122 00:09:40.692734 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-btw4s" Nov 22 00:09:40 crc kubenswrapper[4928]: I1122 00:09:40.734276 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-btw4s" Nov 22 00:09:41 crc kubenswrapper[4928]: I1122 00:09:41.555783 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-btw4s"] Nov 22 00:09:42 crc kubenswrapper[4928]: I1122 00:09:42.358327 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-btw4s" podUID="4f09254e-9e39-40fb-a69e-42d933b4cf7e" containerName="registry-server" containerID="cri-o://d135b9da67490e443227c0909214290d9eb4f161b61ca50b19ebedb0ee5951f2" gracePeriod=2 Nov 22 00:09:45 crc kubenswrapper[4928]: I1122 00:09:45.380444 4928 generic.go:334] "Generic (PLEG): container finished" podID="4f09254e-9e39-40fb-a69e-42d933b4cf7e" containerID="d135b9da67490e443227c0909214290d9eb4f161b61ca50b19ebedb0ee5951f2" exitCode=0 Nov 22 00:09:45 crc kubenswrapper[4928]: I1122 00:09:45.380556 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btw4s" event={"ID":"4f09254e-9e39-40fb-a69e-42d933b4cf7e","Type":"ContainerDied","Data":"d135b9da67490e443227c0909214290d9eb4f161b61ca50b19ebedb0ee5951f2"} Nov 22 00:09:45 crc kubenswrapper[4928]: I1122 00:09:45.905375 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btw4s" Nov 22 00:09:46 crc kubenswrapper[4928]: I1122 00:09:46.087323 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f09254e-9e39-40fb-a69e-42d933b4cf7e-catalog-content\") pod \"4f09254e-9e39-40fb-a69e-42d933b4cf7e\" (UID: \"4f09254e-9e39-40fb-a69e-42d933b4cf7e\") " Nov 22 00:09:46 crc kubenswrapper[4928]: I1122 00:09:46.087380 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f09254e-9e39-40fb-a69e-42d933b4cf7e-utilities\") pod \"4f09254e-9e39-40fb-a69e-42d933b4cf7e\" (UID: \"4f09254e-9e39-40fb-a69e-42d933b4cf7e\") " Nov 22 00:09:46 crc kubenswrapper[4928]: I1122 00:09:46.087410 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9tqq\" (UniqueName: \"kubernetes.io/projected/4f09254e-9e39-40fb-a69e-42d933b4cf7e-kube-api-access-d9tqq\") pod \"4f09254e-9e39-40fb-a69e-42d933b4cf7e\" (UID: \"4f09254e-9e39-40fb-a69e-42d933b4cf7e\") " Nov 22 00:09:46 crc kubenswrapper[4928]: I1122 00:09:46.089228 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f09254e-9e39-40fb-a69e-42d933b4cf7e-utilities" (OuterVolumeSpecName: "utilities") pod "4f09254e-9e39-40fb-a69e-42d933b4cf7e" (UID: "4f09254e-9e39-40fb-a69e-42d933b4cf7e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:09:46 crc kubenswrapper[4928]: I1122 00:09:46.093948 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f09254e-9e39-40fb-a69e-42d933b4cf7e-kube-api-access-d9tqq" (OuterVolumeSpecName: "kube-api-access-d9tqq") pod "4f09254e-9e39-40fb-a69e-42d933b4cf7e" (UID: "4f09254e-9e39-40fb-a69e-42d933b4cf7e"). InnerVolumeSpecName "kube-api-access-d9tqq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:09:46 crc kubenswrapper[4928]: I1122 00:09:46.109493 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f09254e-9e39-40fb-a69e-42d933b4cf7e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f09254e-9e39-40fb-a69e-42d933b4cf7e" (UID: "4f09254e-9e39-40fb-a69e-42d933b4cf7e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:09:46 crc kubenswrapper[4928]: I1122 00:09:46.190100 4928 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f09254e-9e39-40fb-a69e-42d933b4cf7e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 00:09:46 crc kubenswrapper[4928]: I1122 00:09:46.190157 4928 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f09254e-9e39-40fb-a69e-42d933b4cf7e-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 00:09:46 crc kubenswrapper[4928]: I1122 00:09:46.190177 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9tqq\" (UniqueName: \"kubernetes.io/projected/4f09254e-9e39-40fb-a69e-42d933b4cf7e-kube-api-access-d9tqq\") on node \"crc\" DevicePath \"\"" Nov 22 00:09:46 crc kubenswrapper[4928]: I1122 00:09:46.392064 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btw4s" event={"ID":"4f09254e-9e39-40fb-a69e-42d933b4cf7e","Type":"ContainerDied","Data":"4f4da62347207f1de72dec5cffe2ef94c4bae8349e69c295e5545e6f6c65f4b9"} Nov 22 00:09:46 crc kubenswrapper[4928]: I1122 00:09:46.392121 4928 scope.go:117] "RemoveContainer" containerID="d135b9da67490e443227c0909214290d9eb4f161b61ca50b19ebedb0ee5951f2" Nov 22 00:09:46 crc kubenswrapper[4928]: I1122 00:09:46.392228 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btw4s" Nov 22 00:09:46 crc kubenswrapper[4928]: I1122 00:09:46.441638 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-btw4s"] Nov 22 00:09:46 crc kubenswrapper[4928]: I1122 00:09:46.445383 4928 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-btw4s"] Nov 22 00:09:46 crc kubenswrapper[4928]: I1122 00:09:46.455550 4928 scope.go:117] "RemoveContainer" containerID="af14876971f37263f4ab6ce97d1216dca47a562a075f40ce0b7a9d80dac16088" Nov 22 00:09:46 crc kubenswrapper[4928]: I1122 00:09:46.487623 4928 scope.go:117] "RemoveContainer" containerID="168bdce5d402fcd2d01a837bb879d98a9c7421c88f271d2f15a014e715d03fb1" Nov 22 00:09:47 crc kubenswrapper[4928]: I1122 00:09:47.401363 4928 generic.go:334] "Generic (PLEG): container finished" podID="09021957-55db-401c-a705-d3d308653cc9" containerID="0f86c0f71d51e89d72528e8859744bb21f3290133cd4fbb82190f29221709900" exitCode=0 Nov 22 00:09:47 crc kubenswrapper[4928]: I1122 00:09:47.401438 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrw57" event={"ID":"09021957-55db-401c-a705-d3d308653cc9","Type":"ContainerDied","Data":"0f86c0f71d51e89d72528e8859744bb21f3290133cd4fbb82190f29221709900"} Nov 22 00:09:47 crc kubenswrapper[4928]: I1122 00:09:47.408075 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhl2z" event={"ID":"f0703e2f-994d-4ba3-8461-1c90ae1a063d","Type":"ContainerStarted","Data":"fde630a414a8022c871d81adb7092f3e3ee9591d5379bdf3f66fabf50bf7e6c4"} Nov 22 00:09:47 crc kubenswrapper[4928]: I1122 00:09:47.411928 4928 generic.go:334] "Generic (PLEG): container finished" podID="8b9405dc-ca44-4909-a7ca-499354e67172" containerID="2f57bc251f98ce0e1e22ad8945d2f27139bc8bcdbff33fd206d82acf77078624" exitCode=0 Nov 22 00:09:47 crc kubenswrapper[4928]: I1122 00:09:47.412026 
4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rscpw" event={"ID":"8b9405dc-ca44-4909-a7ca-499354e67172","Type":"ContainerDied","Data":"2f57bc251f98ce0e1e22ad8945d2f27139bc8bcdbff33fd206d82acf77078624"} Nov 22 00:09:47 crc kubenswrapper[4928]: I1122 00:09:47.423834 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wnvn4" event={"ID":"b3d779fd-e36a-44a9-93d2-266fc054b0f9","Type":"ContainerStarted","Data":"83a80fcd836e1dadd71754c27ce3d4b6357b7a3dafea96b37fb958a67ff7e15e"} Nov 22 00:09:47 crc kubenswrapper[4928]: I1122 00:09:47.490666 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wnvn4" podStartSLOduration=3.566280689 podStartE2EDuration="1m20.490650881s" podCreationTimestamp="2025-11-22 00:08:27 +0000 UTC" firstStartedPulling="2025-11-22 00:08:29.531218444 +0000 UTC m=+144.819249635" lastFinishedPulling="2025-11-22 00:09:46.455588636 +0000 UTC m=+221.743619827" observedRunningTime="2025-11-22 00:09:47.487361541 +0000 UTC m=+222.775392742" watchObservedRunningTime="2025-11-22 00:09:47.490650881 +0000 UTC m=+222.778682072" Nov 22 00:09:47 crc kubenswrapper[4928]: I1122 00:09:47.625732 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f09254e-9e39-40fb-a69e-42d933b4cf7e" path="/var/lib/kubelet/pods/4f09254e-9e39-40fb-a69e-42d933b4cf7e/volumes" Nov 22 00:09:47 crc kubenswrapper[4928]: I1122 00:09:47.952642 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wnvn4" Nov 22 00:09:47 crc kubenswrapper[4928]: I1122 00:09:47.952830 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wnvn4" Nov 22 00:09:48 crc kubenswrapper[4928]: I1122 00:09:48.445934 4928 generic.go:334] "Generic (PLEG): container finished" 
podID="f0703e2f-994d-4ba3-8461-1c90ae1a063d" containerID="fde630a414a8022c871d81adb7092f3e3ee9591d5379bdf3f66fabf50bf7e6c4" exitCode=0 Nov 22 00:09:48 crc kubenswrapper[4928]: I1122 00:09:48.446821 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhl2z" event={"ID":"f0703e2f-994d-4ba3-8461-1c90ae1a063d","Type":"ContainerDied","Data":"fde630a414a8022c871d81adb7092f3e3ee9591d5379bdf3f66fabf50bf7e6c4"} Nov 22 00:09:49 crc kubenswrapper[4928]: I1122 00:09:49.003508 4928 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-wnvn4" podUID="b3d779fd-e36a-44a9-93d2-266fc054b0f9" containerName="registry-server" probeResult="failure" output=< Nov 22 00:09:49 crc kubenswrapper[4928]: timeout: failed to connect service ":50051" within 1s Nov 22 00:09:49 crc kubenswrapper[4928]: > Nov 22 00:09:58 crc kubenswrapper[4928]: I1122 00:09:58.005152 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wnvn4" Nov 22 00:09:58 crc kubenswrapper[4928]: I1122 00:09:58.067579 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wnvn4" Nov 22 00:10:18 crc kubenswrapper[4928]: I1122 00:10:18.640343 4928 generic.go:334] "Generic (PLEG): container finished" podID="61f75f23-c9ca-453e-a2e4-6d78939022db" containerID="3ef6c61a0633174816468862b18a2bb2e14cc77799200ea5f9ac37fcb91d74bd" exitCode=0 Nov 22 00:10:18 crc kubenswrapper[4928]: I1122 00:10:18.640438 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7vbgf" event={"ID":"61f75f23-c9ca-453e-a2e4-6d78939022db","Type":"ContainerDied","Data":"3ef6c61a0633174816468862b18a2bb2e14cc77799200ea5f9ac37fcb91d74bd"} Nov 22 00:10:18 crc kubenswrapper[4928]: I1122 00:10:18.645019 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-rscpw" event={"ID":"8b9405dc-ca44-4909-a7ca-499354e67172","Type":"ContainerStarted","Data":"05acad2c2b42378f7b44f428e2a04812f75063b3278beff28c280a67342d7faa"} Nov 22 00:10:18 crc kubenswrapper[4928]: I1122 00:10:18.647303 4928 generic.go:334] "Generic (PLEG): container finished" podID="9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb" containerID="cfee5aa36cc36185a99f05dcd9754394f1a83bddabe2c1b1147aa5f99ca5b7c2" exitCode=0 Nov 22 00:10:18 crc kubenswrapper[4928]: I1122 00:10:18.647358 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgmwk" event={"ID":"9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb","Type":"ContainerDied","Data":"cfee5aa36cc36185a99f05dcd9754394f1a83bddabe2c1b1147aa5f99ca5b7c2"} Nov 22 00:10:18 crc kubenswrapper[4928]: I1122 00:10:18.649193 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrw57" event={"ID":"09021957-55db-401c-a705-d3d308653cc9","Type":"ContainerStarted","Data":"2e4cd5ec92eabc0d55429caed569229d3fd543b85a4683a1d76657f538a0e00c"} Nov 22 00:10:18 crc kubenswrapper[4928]: I1122 00:10:18.652899 4928 generic.go:334] "Generic (PLEG): container finished" podID="0caf4878-137a-405f-9173-3187372a5135" containerID="4c089a3f1a49403199fd931afe5006cd13964526e6730f6fb7f666e7109935ec" exitCode=0 Nov 22 00:10:18 crc kubenswrapper[4928]: I1122 00:10:18.652941 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hd5kl" event={"ID":"0caf4878-137a-405f-9173-3187372a5135","Type":"ContainerDied","Data":"4c089a3f1a49403199fd931afe5006cd13964526e6730f6fb7f666e7109935ec"} Nov 22 00:10:18 crc kubenswrapper[4928]: I1122 00:10:18.654629 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhl2z" 
event={"ID":"f0703e2f-994d-4ba3-8461-1c90ae1a063d","Type":"ContainerStarted","Data":"69f7434a4db814cb680cb64399a4bac7ea89cfcc4a9385b40f248109a1314ef1"} Nov 22 00:10:18 crc kubenswrapper[4928]: I1122 00:10:18.690626 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hhl2z" podStartSLOduration=11.93830619 podStartE2EDuration="1m48.690603639s" podCreationTimestamp="2025-11-22 00:08:30 +0000 UTC" firstStartedPulling="2025-11-22 00:08:32.764439598 +0000 UTC m=+148.052470789" lastFinishedPulling="2025-11-22 00:10:09.516737007 +0000 UTC m=+244.804768238" observedRunningTime="2025-11-22 00:10:18.690581178 +0000 UTC m=+253.978612369" watchObservedRunningTime="2025-11-22 00:10:18.690603639 +0000 UTC m=+253.978634840" Nov 22 00:10:18 crc kubenswrapper[4928]: I1122 00:10:18.714972 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rscpw" podStartSLOduration=3.840322771 podStartE2EDuration="1m51.714954942s" podCreationTimestamp="2025-11-22 00:08:27 +0000 UTC" firstStartedPulling="2025-11-22 00:08:29.522498118 +0000 UTC m=+144.810529309" lastFinishedPulling="2025-11-22 00:10:17.397130269 +0000 UTC m=+252.685161480" observedRunningTime="2025-11-22 00:10:18.712049338 +0000 UTC m=+254.000080539" watchObservedRunningTime="2025-11-22 00:10:18.714954942 +0000 UTC m=+254.002986133" Nov 22 00:10:18 crc kubenswrapper[4928]: I1122 00:10:18.734979 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rrw57" podStartSLOduration=3.891225091 podStartE2EDuration="1m51.73496281s" podCreationTimestamp="2025-11-22 00:08:27 +0000 UTC" firstStartedPulling="2025-11-22 00:08:29.553725069 +0000 UTC m=+144.841756260" lastFinishedPulling="2025-11-22 00:10:17.397462778 +0000 UTC m=+252.685493979" observedRunningTime="2025-11-22 00:10:18.732283192 +0000 UTC m=+254.020314383" 
watchObservedRunningTime="2025-11-22 00:10:18.73496281 +0000 UTC m=+254.022993991" Nov 22 00:10:20 crc kubenswrapper[4928]: I1122 00:10:20.669265 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7vbgf" event={"ID":"61f75f23-c9ca-453e-a2e4-6d78939022db","Type":"ContainerStarted","Data":"4f5be1d028d8c9f4a84ea6097167e51a1e702f795e8c9f05283b0977719d18f6"} Nov 22 00:10:20 crc kubenswrapper[4928]: I1122 00:10:20.692542 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7vbgf" podStartSLOduration=3.450939912 podStartE2EDuration="1m53.692525736s" podCreationTimestamp="2025-11-22 00:08:27 +0000 UTC" firstStartedPulling="2025-11-22 00:08:29.516926498 +0000 UTC m=+144.804957689" lastFinishedPulling="2025-11-22 00:10:19.758512322 +0000 UTC m=+255.046543513" observedRunningTime="2025-11-22 00:10:20.690011193 +0000 UTC m=+255.978042424" watchObservedRunningTime="2025-11-22 00:10:20.692525736 +0000 UTC m=+255.980556927" Nov 22 00:10:20 crc kubenswrapper[4928]: I1122 00:10:20.986452 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hhl2z" Nov 22 00:10:20 crc kubenswrapper[4928]: I1122 00:10:20.986914 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hhl2z" Nov 22 00:10:21 crc kubenswrapper[4928]: I1122 00:10:21.676620 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgmwk" event={"ID":"9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb","Type":"ContainerStarted","Data":"226a6051ff762e3f2298877b8ef5ce49f6dd24152f2f48ee9c2f736bf8fa9fd3"} Nov 22 00:10:21 crc kubenswrapper[4928]: I1122 00:10:21.696339 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dgmwk" podStartSLOduration=3.101790414 podStartE2EDuration="1m51.696321107s" 
podCreationTimestamp="2025-11-22 00:08:30 +0000 UTC" firstStartedPulling="2025-11-22 00:08:32.770394489 +0000 UTC m=+148.058425690" lastFinishedPulling="2025-11-22 00:10:21.364925202 +0000 UTC m=+256.652956383" observedRunningTime="2025-11-22 00:10:21.693946808 +0000 UTC m=+256.981978009" watchObservedRunningTime="2025-11-22 00:10:21.696321107 +0000 UTC m=+256.984352298" Nov 22 00:10:22 crc kubenswrapper[4928]: I1122 00:10:22.031267 4928 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hhl2z" podUID="f0703e2f-994d-4ba3-8461-1c90ae1a063d" containerName="registry-server" probeResult="failure" output=< Nov 22 00:10:22 crc kubenswrapper[4928]: timeout: failed to connect service ":50051" within 1s Nov 22 00:10:22 crc kubenswrapper[4928]: > Nov 22 00:10:22 crc kubenswrapper[4928]: I1122 00:10:22.684418 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hd5kl" event={"ID":"0caf4878-137a-405f-9173-3187372a5135","Type":"ContainerStarted","Data":"6f57cdf2e6a3cb4f28bbd71115c28f3fc500dccf9c5346bd2cc3bea23a90b424"} Nov 22 00:10:22 crc kubenswrapper[4928]: I1122 00:10:22.707410 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hd5kl" podStartSLOduration=2.521158994 podStartE2EDuration="1m53.707394997s" podCreationTimestamp="2025-11-22 00:08:29 +0000 UTC" firstStartedPulling="2025-11-22 00:08:30.68233342 +0000 UTC m=+145.970364611" lastFinishedPulling="2025-11-22 00:10:21.868569423 +0000 UTC m=+257.156600614" observedRunningTime="2025-11-22 00:10:22.703166875 +0000 UTC m=+257.991198076" watchObservedRunningTime="2025-11-22 00:10:22.707394997 +0000 UTC m=+257.995426188" Nov 22 00:10:27 crc kubenswrapper[4928]: I1122 00:10:27.720718 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rrw57" Nov 22 00:10:27 crc kubenswrapper[4928]: I1122 
00:10:27.721698 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rrw57" Nov 22 00:10:27 crc kubenswrapper[4928]: I1122 00:10:27.761760 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rrw57" Nov 22 00:10:28 crc kubenswrapper[4928]: I1122 00:10:28.228152 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7vbgf" Nov 22 00:10:28 crc kubenswrapper[4928]: I1122 00:10:28.228187 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7vbgf" Nov 22 00:10:28 crc kubenswrapper[4928]: I1122 00:10:28.273202 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7vbgf" Nov 22 00:10:28 crc kubenswrapper[4928]: I1122 00:10:28.375131 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rscpw" Nov 22 00:10:28 crc kubenswrapper[4928]: I1122 00:10:28.375185 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rscpw" Nov 22 00:10:28 crc kubenswrapper[4928]: I1122 00:10:28.419655 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rscpw" Nov 22 00:10:28 crc kubenswrapper[4928]: I1122 00:10:28.761934 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rrw57" Nov 22 00:10:28 crc kubenswrapper[4928]: I1122 00:10:28.762569 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rscpw" Nov 22 00:10:28 crc kubenswrapper[4928]: I1122 00:10:28.762625 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-7vbgf" Nov 22 00:10:29 crc kubenswrapper[4928]: I1122 00:10:29.482868 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rscpw"] Nov 22 00:10:30 crc kubenswrapper[4928]: I1122 00:10:30.104667 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hd5kl" Nov 22 00:10:30 crc kubenswrapper[4928]: I1122 00:10:30.104712 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hd5kl" Nov 22 00:10:30 crc kubenswrapper[4928]: I1122 00:10:30.167750 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hd5kl" Nov 22 00:10:30 crc kubenswrapper[4928]: I1122 00:10:30.484166 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7vbgf"] Nov 22 00:10:30 crc kubenswrapper[4928]: I1122 00:10:30.726467 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7vbgf" podUID="61f75f23-c9ca-453e-a2e4-6d78939022db" containerName="registry-server" containerID="cri-o://4f5be1d028d8c9f4a84ea6097167e51a1e702f795e8c9f05283b0977719d18f6" gracePeriod=2 Nov 22 00:10:30 crc kubenswrapper[4928]: I1122 00:10:30.726588 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rscpw" podUID="8b9405dc-ca44-4909-a7ca-499354e67172" containerName="registry-server" containerID="cri-o://05acad2c2b42378f7b44f428e2a04812f75063b3278beff28c280a67342d7faa" gracePeriod=2 Nov 22 00:10:30 crc kubenswrapper[4928]: I1122 00:10:30.770689 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hd5kl" Nov 22 00:10:31 crc kubenswrapper[4928]: I1122 00:10:31.027985 4928 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hhl2z" Nov 22 00:10:31 crc kubenswrapper[4928]: I1122 00:10:31.076390 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hhl2z" Nov 22 00:10:31 crc kubenswrapper[4928]: I1122 00:10:31.148729 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dgmwk" Nov 22 00:10:31 crc kubenswrapper[4928]: I1122 00:10:31.148777 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dgmwk" Nov 22 00:10:31 crc kubenswrapper[4928]: I1122 00:10:31.195661 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dgmwk" Nov 22 00:10:31 crc kubenswrapper[4928]: I1122 00:10:31.801111 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dgmwk" Nov 22 00:10:32 crc kubenswrapper[4928]: I1122 00:10:32.326599 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tmdvd"] Nov 22 00:10:34 crc kubenswrapper[4928]: I1122 00:10:34.756450 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rscpw_8b9405dc-ca44-4909-a7ca-499354e67172/registry-server/0.log" Nov 22 00:10:34 crc kubenswrapper[4928]: I1122 00:10:34.757515 4928 generic.go:334] "Generic (PLEG): container finished" podID="8b9405dc-ca44-4909-a7ca-499354e67172" containerID="05acad2c2b42378f7b44f428e2a04812f75063b3278beff28c280a67342d7faa" exitCode=137 Nov 22 00:10:34 crc kubenswrapper[4928]: I1122 00:10:34.757579 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rscpw" 
event={"ID":"8b9405dc-ca44-4909-a7ca-499354e67172","Type":"ContainerDied","Data":"05acad2c2b42378f7b44f428e2a04812f75063b3278beff28c280a67342d7faa"} Nov 22 00:10:34 crc kubenswrapper[4928]: I1122 00:10:34.759133 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7vbgf_61f75f23-c9ca-453e-a2e4-6d78939022db/registry-server/0.log" Nov 22 00:10:34 crc kubenswrapper[4928]: I1122 00:10:34.759735 4928 generic.go:334] "Generic (PLEG): container finished" podID="61f75f23-c9ca-453e-a2e4-6d78939022db" containerID="4f5be1d028d8c9f4a84ea6097167e51a1e702f795e8c9f05283b0977719d18f6" exitCode=137 Nov 22 00:10:34 crc kubenswrapper[4928]: I1122 00:10:34.759759 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7vbgf" event={"ID":"61f75f23-c9ca-453e-a2e4-6d78939022db","Type":"ContainerDied","Data":"4f5be1d028d8c9f4a84ea6097167e51a1e702f795e8c9f05283b0977719d18f6"} Nov 22 00:10:34 crc kubenswrapper[4928]: I1122 00:10:34.882953 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dgmwk"] Nov 22 00:10:34 crc kubenswrapper[4928]: I1122 00:10:34.883204 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dgmwk" podUID="9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb" containerName="registry-server" containerID="cri-o://226a6051ff762e3f2298877b8ef5ce49f6dd24152f2f48ee9c2f736bf8fa9fd3" gracePeriod=2 Nov 22 00:10:36 crc kubenswrapper[4928]: I1122 00:10:36.577323 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rscpw_8b9405dc-ca44-4909-a7ca-499354e67172/registry-server/0.log" Nov 22 00:10:36 crc kubenswrapper[4928]: I1122 00:10:36.578350 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rscpw" Nov 22 00:10:36 crc kubenswrapper[4928]: I1122 00:10:36.723255 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llnph\" (UniqueName: \"kubernetes.io/projected/8b9405dc-ca44-4909-a7ca-499354e67172-kube-api-access-llnph\") pod \"8b9405dc-ca44-4909-a7ca-499354e67172\" (UID: \"8b9405dc-ca44-4909-a7ca-499354e67172\") " Nov 22 00:10:36 crc kubenswrapper[4928]: I1122 00:10:36.723349 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b9405dc-ca44-4909-a7ca-499354e67172-catalog-content\") pod \"8b9405dc-ca44-4909-a7ca-499354e67172\" (UID: \"8b9405dc-ca44-4909-a7ca-499354e67172\") " Nov 22 00:10:36 crc kubenswrapper[4928]: I1122 00:10:36.723430 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b9405dc-ca44-4909-a7ca-499354e67172-utilities\") pod \"8b9405dc-ca44-4909-a7ca-499354e67172\" (UID: \"8b9405dc-ca44-4909-a7ca-499354e67172\") " Nov 22 00:10:36 crc kubenswrapper[4928]: I1122 00:10:36.724525 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b9405dc-ca44-4909-a7ca-499354e67172-utilities" (OuterVolumeSpecName: "utilities") pod "8b9405dc-ca44-4909-a7ca-499354e67172" (UID: "8b9405dc-ca44-4909-a7ca-499354e67172"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:10:36 crc kubenswrapper[4928]: I1122 00:10:36.732588 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b9405dc-ca44-4909-a7ca-499354e67172-kube-api-access-llnph" (OuterVolumeSpecName: "kube-api-access-llnph") pod "8b9405dc-ca44-4909-a7ca-499354e67172" (UID: "8b9405dc-ca44-4909-a7ca-499354e67172"). InnerVolumeSpecName "kube-api-access-llnph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:10:36 crc kubenswrapper[4928]: I1122 00:10:36.772198 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rscpw_8b9405dc-ca44-4909-a7ca-499354e67172/registry-server/0.log" Nov 22 00:10:36 crc kubenswrapper[4928]: I1122 00:10:36.772859 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rscpw" event={"ID":"8b9405dc-ca44-4909-a7ca-499354e67172","Type":"ContainerDied","Data":"cc6c72809e1f1e377e150b06720e6667c16b77a768068d8ae553db2fcd8fd7a4"} Nov 22 00:10:36 crc kubenswrapper[4928]: I1122 00:10:36.772921 4928 scope.go:117] "RemoveContainer" containerID="05acad2c2b42378f7b44f428e2a04812f75063b3278beff28c280a67342d7faa" Nov 22 00:10:36 crc kubenswrapper[4928]: I1122 00:10:36.773067 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rscpw" Nov 22 00:10:36 crc kubenswrapper[4928]: I1122 00:10:36.776411 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b9405dc-ca44-4909-a7ca-499354e67172-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b9405dc-ca44-4909-a7ca-499354e67172" (UID: "8b9405dc-ca44-4909-a7ca-499354e67172"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:10:36 crc kubenswrapper[4928]: I1122 00:10:36.799164 4928 scope.go:117] "RemoveContainer" containerID="2f57bc251f98ce0e1e22ad8945d2f27139bc8bcdbff33fd206d82acf77078624" Nov 22 00:10:36 crc kubenswrapper[4928]: I1122 00:10:36.821408 4928 scope.go:117] "RemoveContainer" containerID="ca588e0609f6eee7806e03ee6890e65ece21e26a963663fd25f059de905e9623" Nov 22 00:10:36 crc kubenswrapper[4928]: I1122 00:10:36.824457 4928 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b9405dc-ca44-4909-a7ca-499354e67172-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 00:10:36 crc kubenswrapper[4928]: I1122 00:10:36.824497 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llnph\" (UniqueName: \"kubernetes.io/projected/8b9405dc-ca44-4909-a7ca-499354e67172-kube-api-access-llnph\") on node \"crc\" DevicePath \"\"" Nov 22 00:10:36 crc kubenswrapper[4928]: I1122 00:10:36.824513 4928 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b9405dc-ca44-4909-a7ca-499354e67172-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 00:10:37 crc kubenswrapper[4928]: I1122 00:10:37.123889 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rscpw"] Nov 22 00:10:37 crc kubenswrapper[4928]: I1122 00:10:37.130816 4928 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rscpw"] Nov 22 00:10:37 crc kubenswrapper[4928]: I1122 00:10:37.175182 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7vbgf_61f75f23-c9ca-453e-a2e4-6d78939022db/registry-server/0.log" Nov 22 00:10:37 crc kubenswrapper[4928]: I1122 00:10:37.175770 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7vbgf" Nov 22 00:10:37 crc kubenswrapper[4928]: I1122 00:10:37.331719 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61f75f23-c9ca-453e-a2e4-6d78939022db-catalog-content\") pod \"61f75f23-c9ca-453e-a2e4-6d78939022db\" (UID: \"61f75f23-c9ca-453e-a2e4-6d78939022db\") " Nov 22 00:10:37 crc kubenswrapper[4928]: I1122 00:10:37.331828 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61f75f23-c9ca-453e-a2e4-6d78939022db-utilities\") pod \"61f75f23-c9ca-453e-a2e4-6d78939022db\" (UID: \"61f75f23-c9ca-453e-a2e4-6d78939022db\") " Nov 22 00:10:37 crc kubenswrapper[4928]: I1122 00:10:37.331861 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkqvm\" (UniqueName: \"kubernetes.io/projected/61f75f23-c9ca-453e-a2e4-6d78939022db-kube-api-access-wkqvm\") pod \"61f75f23-c9ca-453e-a2e4-6d78939022db\" (UID: \"61f75f23-c9ca-453e-a2e4-6d78939022db\") " Nov 22 00:10:37 crc kubenswrapper[4928]: I1122 00:10:37.333529 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61f75f23-c9ca-453e-a2e4-6d78939022db-utilities" (OuterVolumeSpecName: "utilities") pod "61f75f23-c9ca-453e-a2e4-6d78939022db" (UID: "61f75f23-c9ca-453e-a2e4-6d78939022db"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:10:37 crc kubenswrapper[4928]: I1122 00:10:37.340190 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61f75f23-c9ca-453e-a2e4-6d78939022db-kube-api-access-wkqvm" (OuterVolumeSpecName: "kube-api-access-wkqvm") pod "61f75f23-c9ca-453e-a2e4-6d78939022db" (UID: "61f75f23-c9ca-453e-a2e4-6d78939022db"). InnerVolumeSpecName "kube-api-access-wkqvm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:10:37 crc kubenswrapper[4928]: I1122 00:10:37.433327 4928 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61f75f23-c9ca-453e-a2e4-6d78939022db-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 00:10:37 crc kubenswrapper[4928]: I1122 00:10:37.433359 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkqvm\" (UniqueName: \"kubernetes.io/projected/61f75f23-c9ca-453e-a2e4-6d78939022db-kube-api-access-wkqvm\") on node \"crc\" DevicePath \"\"" Nov 22 00:10:37 crc kubenswrapper[4928]: I1122 00:10:37.624930 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b9405dc-ca44-4909-a7ca-499354e67172" path="/var/lib/kubelet/pods/8b9405dc-ca44-4909-a7ca-499354e67172/volumes" Nov 22 00:10:37 crc kubenswrapper[4928]: I1122 00:10:37.779659 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7vbgf_61f75f23-c9ca-453e-a2e4-6d78939022db/registry-server/0.log" Nov 22 00:10:37 crc kubenswrapper[4928]: I1122 00:10:37.780350 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7vbgf" event={"ID":"61f75f23-c9ca-453e-a2e4-6d78939022db","Type":"ContainerDied","Data":"61a6a9723aa8a58a5338479f43b5ae2ec6ce27aafee648aed277e10bb48f3b2e"} Nov 22 00:10:37 crc kubenswrapper[4928]: I1122 00:10:37.780394 4928 scope.go:117] "RemoveContainer" containerID="4f5be1d028d8c9f4a84ea6097167e51a1e702f795e8c9f05283b0977719d18f6" Nov 22 00:10:37 crc kubenswrapper[4928]: I1122 00:10:37.780503 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7vbgf" Nov 22 00:10:37 crc kubenswrapper[4928]: I1122 00:10:37.796291 4928 scope.go:117] "RemoveContainer" containerID="3ef6c61a0633174816468862b18a2bb2e14cc77799200ea5f9ac37fcb91d74bd" Nov 22 00:10:37 crc kubenswrapper[4928]: I1122 00:10:37.823969 4928 scope.go:117] "RemoveContainer" containerID="fdde58316ad233a981402220994c9ec80a9bed9d813249ac51d1ff9fee2c9f6c" Nov 22 00:10:38 crc kubenswrapper[4928]: I1122 00:10:38.103323 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61f75f23-c9ca-453e-a2e4-6d78939022db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61f75f23-c9ca-453e-a2e4-6d78939022db" (UID: "61f75f23-c9ca-453e-a2e4-6d78939022db"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:10:38 crc kubenswrapper[4928]: I1122 00:10:38.141676 4928 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61f75f23-c9ca-453e-a2e4-6d78939022db-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 00:10:38 crc kubenswrapper[4928]: I1122 00:10:38.405330 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7vbgf"] Nov 22 00:10:38 crc kubenswrapper[4928]: I1122 00:10:38.410478 4928 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7vbgf"] Nov 22 00:10:39 crc kubenswrapper[4928]: I1122 00:10:39.625694 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61f75f23-c9ca-453e-a2e4-6d78939022db" path="/var/lib/kubelet/pods/61f75f23-c9ca-453e-a2e4-6d78939022db/volumes" Nov 22 00:10:39 crc kubenswrapper[4928]: I1122 00:10:39.792117 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dgmwk_9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb/registry-server/0.log" Nov 22 00:10:39 crc 
kubenswrapper[4928]: I1122 00:10:39.792933 4928 generic.go:334] "Generic (PLEG): container finished" podID="9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb" containerID="226a6051ff762e3f2298877b8ef5ce49f6dd24152f2f48ee9c2f736bf8fa9fd3" exitCode=137 Nov 22 00:10:39 crc kubenswrapper[4928]: I1122 00:10:39.792979 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgmwk" event={"ID":"9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb","Type":"ContainerDied","Data":"226a6051ff762e3f2298877b8ef5ce49f6dd24152f2f48ee9c2f736bf8fa9fd3"} Nov 22 00:10:40 crc kubenswrapper[4928]: I1122 00:10:40.524248 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dgmwk_9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb/registry-server/0.log" Nov 22 00:10:40 crc kubenswrapper[4928]: I1122 00:10:40.525303 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dgmwk" Nov 22 00:10:40 crc kubenswrapper[4928]: I1122 00:10:40.672701 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grmbq\" (UniqueName: \"kubernetes.io/projected/9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb-kube-api-access-grmbq\") pod \"9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb\" (UID: \"9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb\") " Nov 22 00:10:40 crc kubenswrapper[4928]: I1122 00:10:40.673468 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb-utilities\") pod \"9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb\" (UID: \"9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb\") " Nov 22 00:10:40 crc kubenswrapper[4928]: I1122 00:10:40.673525 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb-catalog-content\") pod 
\"9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb\" (UID: \"9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb\") " Nov 22 00:10:40 crc kubenswrapper[4928]: I1122 00:10:40.674439 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb-utilities" (OuterVolumeSpecName: "utilities") pod "9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb" (UID: "9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:10:40 crc kubenswrapper[4928]: I1122 00:10:40.680253 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb-kube-api-access-grmbq" (OuterVolumeSpecName: "kube-api-access-grmbq") pod "9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb" (UID: "9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb"). InnerVolumeSpecName "kube-api-access-grmbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:10:40 crc kubenswrapper[4928]: I1122 00:10:40.776244 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grmbq\" (UniqueName: \"kubernetes.io/projected/9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb-kube-api-access-grmbq\") on node \"crc\" DevicePath \"\"" Nov 22 00:10:40 crc kubenswrapper[4928]: I1122 00:10:40.776272 4928 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 00:10:40 crc kubenswrapper[4928]: I1122 00:10:40.798601 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dgmwk_9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb/registry-server/0.log" Nov 22 00:10:40 crc kubenswrapper[4928]: I1122 00:10:40.800022 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgmwk" 
event={"ID":"9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb","Type":"ContainerDied","Data":"47f20cc533869313fd6d5517bffa6a783377997ced2e2a82a867ec2673f4c9f0"} Nov 22 00:10:40 crc kubenswrapper[4928]: I1122 00:10:40.800070 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dgmwk" Nov 22 00:10:40 crc kubenswrapper[4928]: I1122 00:10:40.800073 4928 scope.go:117] "RemoveContainer" containerID="226a6051ff762e3f2298877b8ef5ce49f6dd24152f2f48ee9c2f736bf8fa9fd3" Nov 22 00:10:40 crc kubenswrapper[4928]: I1122 00:10:40.815341 4928 scope.go:117] "RemoveContainer" containerID="cfee5aa36cc36185a99f05dcd9754394f1a83bddabe2c1b1147aa5f99ca5b7c2" Nov 22 00:10:40 crc kubenswrapper[4928]: I1122 00:10:40.841885 4928 scope.go:117] "RemoveContainer" containerID="bcb8c451a9d7b01d32d5963965c526bfe01a837c0c946aa33c5fe2abc94c1116" Nov 22 00:10:40 crc kubenswrapper[4928]: I1122 00:10:40.914963 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb" (UID: "9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:10:40 crc kubenswrapper[4928]: I1122 00:10:40.977961 4928 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 00:10:41 crc kubenswrapper[4928]: I1122 00:10:41.149495 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dgmwk"] Nov 22 00:10:41 crc kubenswrapper[4928]: I1122 00:10:41.153503 4928 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dgmwk"] Nov 22 00:10:41 crc kubenswrapper[4928]: I1122 00:10:41.625991 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb" path="/var/lib/kubelet/pods/9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb/volumes" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.358359 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" podUID="42bd19a9-be1a-4c00-bf19-536e55a1ae11" containerName="oauth-openshift" containerID="cri-o://573cf95514cd9ad9636f9ebc08c29ac3cd70236c8b5ddd228e2cd5a388be472d" gracePeriod=15 Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.708847 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.748816 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-79d99db75d-zgrmq"] Nov 22 00:10:57 crc kubenswrapper[4928]: E1122 00:10:57.750401 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f75f23-c9ca-453e-a2e4-6d78939022db" containerName="extract-utilities" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.750440 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f75f23-c9ca-453e-a2e4-6d78939022db" containerName="extract-utilities" Nov 22 00:10:57 crc kubenswrapper[4928]: E1122 00:10:57.750477 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb" containerName="registry-server" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.750556 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb" containerName="registry-server" Nov 22 00:10:57 crc kubenswrapper[4928]: E1122 00:10:57.750567 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b9405dc-ca44-4909-a7ca-499354e67172" containerName="extract-utilities" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.750577 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b9405dc-ca44-4909-a7ca-499354e67172" containerName="extract-utilities" Nov 22 00:10:57 crc kubenswrapper[4928]: E1122 00:10:57.750591 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f09254e-9e39-40fb-a69e-42d933b4cf7e" containerName="registry-server" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.750603 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f09254e-9e39-40fb-a69e-42d933b4cf7e" containerName="registry-server" Nov 22 00:10:57 crc kubenswrapper[4928]: E1122 00:10:57.750612 4928 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="61f75f23-c9ca-453e-a2e4-6d78939022db" containerName="registry-server" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.750620 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f75f23-c9ca-453e-a2e4-6d78939022db" containerName="registry-server" Nov 22 00:10:57 crc kubenswrapper[4928]: E1122 00:10:57.750630 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb" containerName="extract-utilities" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.750638 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb" containerName="extract-utilities" Nov 22 00:10:57 crc kubenswrapper[4928]: E1122 00:10:57.750645 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f75f23-c9ca-453e-a2e4-6d78939022db" containerName="extract-content" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.750653 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f75f23-c9ca-453e-a2e4-6d78939022db" containerName="extract-content" Nov 22 00:10:57 crc kubenswrapper[4928]: E1122 00:10:57.750664 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f09254e-9e39-40fb-a69e-42d933b4cf7e" containerName="extract-utilities" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.750671 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f09254e-9e39-40fb-a69e-42d933b4cf7e" containerName="extract-utilities" Nov 22 00:10:57 crc kubenswrapper[4928]: E1122 00:10:57.750681 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b9405dc-ca44-4909-a7ca-499354e67172" containerName="registry-server" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.750689 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b9405dc-ca44-4909-a7ca-499354e67172" containerName="registry-server" Nov 22 00:10:57 crc kubenswrapper[4928]: E1122 00:10:57.750699 4928 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8b9405dc-ca44-4909-a7ca-499354e67172" containerName="extract-content" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.750707 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b9405dc-ca44-4909-a7ca-499354e67172" containerName="extract-content" Nov 22 00:10:57 crc kubenswrapper[4928]: E1122 00:10:57.750715 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb" containerName="extract-content" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.750722 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb" containerName="extract-content" Nov 22 00:10:57 crc kubenswrapper[4928]: E1122 00:10:57.750734 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f09254e-9e39-40fb-a69e-42d933b4cf7e" containerName="extract-content" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.750741 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f09254e-9e39-40fb-a69e-42d933b4cf7e" containerName="extract-content" Nov 22 00:10:57 crc kubenswrapper[4928]: E1122 00:10:57.750750 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="760e859b-a35f-4663-8c09-5a1225237ecd" containerName="pruner" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.750757 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="760e859b-a35f-4663-8c09-5a1225237ecd" containerName="pruner" Nov 22 00:10:57 crc kubenswrapper[4928]: E1122 00:10:57.750768 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42bd19a9-be1a-4c00-bf19-536e55a1ae11" containerName="oauth-openshift" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.750776 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="42bd19a9-be1a-4c00-bf19-536e55a1ae11" containerName="oauth-openshift" Nov 22 00:10:57 crc kubenswrapper[4928]: E1122 00:10:57.750802 4928 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="05e120bf-63dd-4299-b548-9c0afc78bc65" containerName="pruner" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.750810 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="05e120bf-63dd-4299-b548-9c0afc78bc65" containerName="pruner" Nov 22 00:10:57 crc kubenswrapper[4928]: E1122 00:10:57.750823 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10b9e4e-381b-425e-a1ce-3615a7580960" containerName="image-pruner" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.750833 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="c10b9e4e-381b-425e-a1ce-3615a7580960" containerName="image-pruner" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.750962 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f09254e-9e39-40fb-a69e-42d933b4cf7e" containerName="registry-server" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.750986 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="05e120bf-63dd-4299-b548-9c0afc78bc65" containerName="pruner" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.750997 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="61f75f23-c9ca-453e-a2e4-6d78939022db" containerName="registry-server" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.751008 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e0f4538-fdbd-48ad-9ee8-05eb7dc60fcb" containerName="registry-server" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.751021 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="42bd19a9-be1a-4c00-bf19-536e55a1ae11" containerName="oauth-openshift" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.751030 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b9405dc-ca44-4909-a7ca-499354e67172" containerName="registry-server" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.751039 4928 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="760e859b-a35f-4663-8c09-5a1225237ecd" containerName="pruner" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.751049 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="c10b9e4e-381b-425e-a1ce-3615a7580960" containerName="image-pruner" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.751898 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.756146 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79d99db75d-zgrmq"] Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.902058 4928 generic.go:334] "Generic (PLEG): container finished" podID="42bd19a9-be1a-4c00-bf19-536e55a1ae11" containerID="573cf95514cd9ad9636f9ebc08c29ac3cd70236c8b5ddd228e2cd5a388be472d" exitCode=0 Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.902110 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" event={"ID":"42bd19a9-be1a-4c00-bf19-536e55a1ae11","Type":"ContainerDied","Data":"573cf95514cd9ad9636f9ebc08c29ac3cd70236c8b5ddd228e2cd5a388be472d"} Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.902166 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.902182 4928 scope.go:117] "RemoveContainer" containerID="573cf95514cd9ad9636f9ebc08c29ac3cd70236c8b5ddd228e2cd5a388be472d" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.902170 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tmdvd" event={"ID":"42bd19a9-be1a-4c00-bf19-536e55a1ae11","Type":"ContainerDied","Data":"ccc4055435ffae915b31b3deac007d11a9d1917cd7ee8c31634aed9f7a1676fe"} Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.910682 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-router-certs\") pod \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.910740 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-trusted-ca-bundle\") pod \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.910782 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-cliconfig\") pod \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.910836 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-user-template-login\") pod \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.910886 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-user-template-provider-selection\") pod \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.910917 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wdpp\" (UniqueName: \"kubernetes.io/projected/42bd19a9-be1a-4c00-bf19-536e55a1ae11-kube-api-access-7wdpp\") pod \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.910951 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42bd19a9-be1a-4c00-bf19-536e55a1ae11-audit-dir\") pod \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.910998 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/42bd19a9-be1a-4c00-bf19-536e55a1ae11-audit-policies\") pod \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.911034 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-service-ca\") pod 
\"42bd19a9-be1a-4c00-bf19-536e55a1ae11\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.911066 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-session\") pod \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.911105 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-serving-cert\") pod \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.911140 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-user-template-error\") pod \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.911169 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-ocp-branding-template\") pod \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\" (UID: \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.911200 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-user-idp-0-file-data\") pod \"42bd19a9-be1a-4c00-bf19-536e55a1ae11\" (UID: 
\"42bd19a9-be1a-4c00-bf19-536e55a1ae11\") " Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.911396 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.911432 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.911453 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-system-router-certs\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.911501 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-user-template-error\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.911531 4928 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.911560 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.911593 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0026117a-833b-4930-a3fa-d97e20f8d507-audit-policies\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.911616 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6vdq\" (UniqueName: \"kubernetes.io/projected/0026117a-833b-4930-a3fa-d97e20f8d507-kube-api-access-q6vdq\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.911636 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0026117a-833b-4930-a3fa-d97e20f8d507-audit-dir\") 
pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.911662 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-user-template-login\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.911692 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.911724 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-system-session\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.911756 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-system-service-ca\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " 
pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.911824 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.911860 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "42bd19a9-be1a-4c00-bf19-536e55a1ae11" (UID: "42bd19a9-be1a-4c00-bf19-536e55a1ae11"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.911915 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42bd19a9-be1a-4c00-bf19-536e55a1ae11-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "42bd19a9-be1a-4c00-bf19-536e55a1ae11" (UID: "42bd19a9-be1a-4c00-bf19-536e55a1ae11"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.911985 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42bd19a9-be1a-4c00-bf19-536e55a1ae11-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "42bd19a9-be1a-4c00-bf19-536e55a1ae11" (UID: "42bd19a9-be1a-4c00-bf19-536e55a1ae11"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.912023 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "42bd19a9-be1a-4c00-bf19-536e55a1ae11" (UID: "42bd19a9-be1a-4c00-bf19-536e55a1ae11"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.912528 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "42bd19a9-be1a-4c00-bf19-536e55a1ae11" (UID: "42bd19a9-be1a-4c00-bf19-536e55a1ae11"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.917754 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "42bd19a9-be1a-4c00-bf19-536e55a1ae11" (UID: "42bd19a9-be1a-4c00-bf19-536e55a1ae11"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.918158 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42bd19a9-be1a-4c00-bf19-536e55a1ae11-kube-api-access-7wdpp" (OuterVolumeSpecName: "kube-api-access-7wdpp") pod "42bd19a9-be1a-4c00-bf19-536e55a1ae11" (UID: "42bd19a9-be1a-4c00-bf19-536e55a1ae11"). InnerVolumeSpecName "kube-api-access-7wdpp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.921854 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "42bd19a9-be1a-4c00-bf19-536e55a1ae11" (UID: "42bd19a9-be1a-4c00-bf19-536e55a1ae11"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.931373 4928 scope.go:117] "RemoveContainer" containerID="573cf95514cd9ad9636f9ebc08c29ac3cd70236c8b5ddd228e2cd5a388be472d" Nov 22 00:10:57 crc kubenswrapper[4928]: E1122 00:10:57.932169 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"573cf95514cd9ad9636f9ebc08c29ac3cd70236c8b5ddd228e2cd5a388be472d\": container with ID starting with 573cf95514cd9ad9636f9ebc08c29ac3cd70236c8b5ddd228e2cd5a388be472d not found: ID does not exist" containerID="573cf95514cd9ad9636f9ebc08c29ac3cd70236c8b5ddd228e2cd5a388be472d" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.932243 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"573cf95514cd9ad9636f9ebc08c29ac3cd70236c8b5ddd228e2cd5a388be472d"} err="failed to get container status \"573cf95514cd9ad9636f9ebc08c29ac3cd70236c8b5ddd228e2cd5a388be472d\": rpc error: code = NotFound desc = could not find container \"573cf95514cd9ad9636f9ebc08c29ac3cd70236c8b5ddd228e2cd5a388be472d\": container with ID starting with 573cf95514cd9ad9636f9ebc08c29ac3cd70236c8b5ddd228e2cd5a388be472d not found: ID does not exist" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.934875 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "42bd19a9-be1a-4c00-bf19-536e55a1ae11" (UID: "42bd19a9-be1a-4c00-bf19-536e55a1ae11"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.935124 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "42bd19a9-be1a-4c00-bf19-536e55a1ae11" (UID: "42bd19a9-be1a-4c00-bf19-536e55a1ae11"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.935499 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "42bd19a9-be1a-4c00-bf19-536e55a1ae11" (UID: "42bd19a9-be1a-4c00-bf19-536e55a1ae11"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.935991 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "42bd19a9-be1a-4c00-bf19-536e55a1ae11" (UID: "42bd19a9-be1a-4c00-bf19-536e55a1ae11"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.936320 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "42bd19a9-be1a-4c00-bf19-536e55a1ae11" (UID: "42bd19a9-be1a-4c00-bf19-536e55a1ae11"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:10:57 crc kubenswrapper[4928]: I1122 00:10:57.936906 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "42bd19a9-be1a-4c00-bf19-536e55a1ae11" (UID: "42bd19a9-be1a-4c00-bf19-536e55a1ae11"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.012520 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-user-template-error\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.012570 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:58 crc 
kubenswrapper[4928]: I1122 00:10:58.012602 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.012637 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0026117a-833b-4930-a3fa-d97e20f8d507-audit-policies\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.012657 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6vdq\" (UniqueName: \"kubernetes.io/projected/0026117a-833b-4930-a3fa-d97e20f8d507-kube-api-access-q6vdq\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.012679 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0026117a-833b-4930-a3fa-d97e20f8d507-audit-dir\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.012725 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-user-template-login\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: 
\"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.012755 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.012784 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-system-session\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.012835 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-system-service-ca\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.012871 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.012902 4928 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.012955 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-system-router-certs\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.012979 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.013035 4928 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.013050 4928 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.013064 4928 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" 
(UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.013076 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wdpp\" (UniqueName: \"kubernetes.io/projected/42bd19a9-be1a-4c00-bf19-536e55a1ae11-kube-api-access-7wdpp\") on node \"crc\" DevicePath \"\"" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.013089 4928 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42bd19a9-be1a-4c00-bf19-536e55a1ae11-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.013100 4928 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/42bd19a9-be1a-4c00-bf19-536e55a1ae11-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.013114 4928 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.013126 4928 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.013138 4928 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.013151 4928 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.013163 4928 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.013177 4928 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.013189 4928 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.013182 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0026117a-833b-4930-a3fa-d97e20f8d507-audit-dir\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.013203 4928 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42bd19a9-be1a-4c00-bf19-536e55a1ae11-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.014282 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/0026117a-833b-4930-a3fa-d97e20f8d507-audit-policies\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.014349 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.014755 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.014913 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-system-service-ca\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.016474 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " 
pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.016514 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-system-session\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.017214 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.017493 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-system-router-certs\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.017883 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.019292 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-user-template-error\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.019371 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-user-template-login\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.020523 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0026117a-833b-4930-a3fa-d97e20f8d507-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.032433 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6vdq\" (UniqueName: \"kubernetes.io/projected/0026117a-833b-4930-a3fa-d97e20f8d507-kube-api-access-q6vdq\") pod \"oauth-openshift-79d99db75d-zgrmq\" (UID: \"0026117a-833b-4930-a3fa-d97e20f8d507\") " pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.083546 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.245273 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tmdvd"] Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.248871 4928 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tmdvd"] Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.535696 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79d99db75d-zgrmq"] Nov 22 00:10:58 crc kubenswrapper[4928]: I1122 00:10:58.914586 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" event={"ID":"0026117a-833b-4930-a3fa-d97e20f8d507","Type":"ContainerStarted","Data":"31cad3e412b3400546beff13fe032046da8f5e68b287fc8066f4e6b66060ae07"} Nov 22 00:10:59 crc kubenswrapper[4928]: I1122 00:10:59.625187 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42bd19a9-be1a-4c00-bf19-536e55a1ae11" path="/var/lib/kubelet/pods/42bd19a9-be1a-4c00-bf19-536e55a1ae11/volumes" Nov 22 00:10:59 crc kubenswrapper[4928]: I1122 00:10:59.921142 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" event={"ID":"0026117a-833b-4930-a3fa-d97e20f8d507","Type":"ContainerStarted","Data":"7c92b382b878d73829003c7fe2deb2c6f8b5abf23566690ad5fab3765f07f8df"} Nov 22 00:10:59 crc kubenswrapper[4928]: I1122 00:10:59.921908 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:10:59 crc kubenswrapper[4928]: I1122 00:10:59.941756 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" podStartSLOduration=27.941740546 
podStartE2EDuration="27.941740546s" podCreationTimestamp="2025-11-22 00:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:10:59.941133901 +0000 UTC m=+295.229165112" watchObservedRunningTime="2025-11-22 00:10:59.941740546 +0000 UTC m=+295.229771737" Nov 22 00:11:00 crc kubenswrapper[4928]: I1122 00:11:00.280338 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-79d99db75d-zgrmq" Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.003936 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wnvn4"] Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.006211 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wnvn4" podUID="b3d779fd-e36a-44a9-93d2-266fc054b0f9" containerName="registry-server" containerID="cri-o://83a80fcd836e1dadd71754c27ce3d4b6357b7a3dafea96b37fb958a67ff7e15e" gracePeriod=30 Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.011927 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rrw57"] Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.012210 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rrw57" podUID="09021957-55db-401c-a705-d3d308653cc9" containerName="registry-server" containerID="cri-o://2e4cd5ec92eabc0d55429caed569229d3fd543b85a4683a1d76657f538a0e00c" gracePeriod=30 Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.021034 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-54l8n"] Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.021310 4928 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/marketplace-operator-79b997595-54l8n" podUID="db624f52-ab96-42e2-a54c-62c2abc36274" containerName="marketplace-operator" containerID="cri-o://d1bd781d185e15b4394f2613296f454a0919def47554a012ee572aa17c9558df" gracePeriod=30
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.029058 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hd5kl"]
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.029346 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hd5kl" podUID="0caf4878-137a-405f-9173-3187372a5135" containerName="registry-server" containerID="cri-o://6f57cdf2e6a3cb4f28bbd71115c28f3fc500dccf9c5346bd2cc3bea23a90b424" gracePeriod=30
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.040286 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hhl2z"]
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.040536 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hhl2z" podUID="f0703e2f-994d-4ba3-8461-1c90ae1a063d" containerName="registry-server" containerID="cri-o://69f7434a4db814cb680cb64399a4bac7ea89cfcc4a9385b40f248109a1314ef1" gracePeriod=30
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.049942 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8v67p"]
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.050618 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8v67p"
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.065396 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8v67p"]
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.110649 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6kb2\" (UniqueName: \"kubernetes.io/projected/19f06771-3add-4b82-a14e-a3395bb04a75-kube-api-access-r6kb2\") pod \"marketplace-operator-79b997595-8v67p\" (UID: \"19f06771-3add-4b82-a14e-a3395bb04a75\") " pod="openshift-marketplace/marketplace-operator-79b997595-8v67p"
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.110721 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/19f06771-3add-4b82-a14e-a3395bb04a75-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8v67p\" (UID: \"19f06771-3add-4b82-a14e-a3395bb04a75\") " pod="openshift-marketplace/marketplace-operator-79b997595-8v67p"
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.110742 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/19f06771-3add-4b82-a14e-a3395bb04a75-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8v67p\" (UID: \"19f06771-3add-4b82-a14e-a3395bb04a75\") " pod="openshift-marketplace/marketplace-operator-79b997595-8v67p"
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.211619 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6kb2\" (UniqueName: \"kubernetes.io/projected/19f06771-3add-4b82-a14e-a3395bb04a75-kube-api-access-r6kb2\") pod \"marketplace-operator-79b997595-8v67p\" (UID: \"19f06771-3add-4b82-a14e-a3395bb04a75\") " pod="openshift-marketplace/marketplace-operator-79b997595-8v67p"
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.211674 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/19f06771-3add-4b82-a14e-a3395bb04a75-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8v67p\" (UID: \"19f06771-3add-4b82-a14e-a3395bb04a75\") " pod="openshift-marketplace/marketplace-operator-79b997595-8v67p"
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.211693 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/19f06771-3add-4b82-a14e-a3395bb04a75-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8v67p\" (UID: \"19f06771-3add-4b82-a14e-a3395bb04a75\") " pod="openshift-marketplace/marketplace-operator-79b997595-8v67p"
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.213152 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/19f06771-3add-4b82-a14e-a3395bb04a75-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8v67p\" (UID: \"19f06771-3add-4b82-a14e-a3395bb04a75\") " pod="openshift-marketplace/marketplace-operator-79b997595-8v67p"
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.223735 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/19f06771-3add-4b82-a14e-a3395bb04a75-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8v67p\" (UID: \"19f06771-3add-4b82-a14e-a3395bb04a75\") " pod="openshift-marketplace/marketplace-operator-79b997595-8v67p"
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.227455 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6kb2\" (UniqueName: \"kubernetes.io/projected/19f06771-3add-4b82-a14e-a3395bb04a75-kube-api-access-r6kb2\") pod \"marketplace-operator-79b997595-8v67p\" (UID: \"19f06771-3add-4b82-a14e-a3395bb04a75\") " pod="openshift-marketplace/marketplace-operator-79b997595-8v67p"
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.374023 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8v67p"
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.457296 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rrw57"
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.458736 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wnvn4"
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.537868 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-54l8n"
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.588920 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hhl2z"
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.619378 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hz2vc\" (UniqueName: \"kubernetes.io/projected/09021957-55db-401c-a705-d3d308653cc9-kube-api-access-hz2vc\") pod \"09021957-55db-401c-a705-d3d308653cc9\" (UID: \"09021957-55db-401c-a705-d3d308653cc9\") "
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.619444 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09021957-55db-401c-a705-d3d308653cc9-catalog-content\") pod \"09021957-55db-401c-a705-d3d308653cc9\" (UID: \"09021957-55db-401c-a705-d3d308653cc9\") "
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.619620 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6gcs\" (UniqueName: \"kubernetes.io/projected/b3d779fd-e36a-44a9-93d2-266fc054b0f9-kube-api-access-q6gcs\") pod \"b3d779fd-e36a-44a9-93d2-266fc054b0f9\" (UID: \"b3d779fd-e36a-44a9-93d2-266fc054b0f9\") "
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.619640 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3d779fd-e36a-44a9-93d2-266fc054b0f9-catalog-content\") pod \"b3d779fd-e36a-44a9-93d2-266fc054b0f9\" (UID: \"b3d779fd-e36a-44a9-93d2-266fc054b0f9\") "
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.619665 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09021957-55db-401c-a705-d3d308653cc9-utilities\") pod \"09021957-55db-401c-a705-d3d308653cc9\" (UID: \"09021957-55db-401c-a705-d3d308653cc9\") "
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.619691 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db624f52-ab96-42e2-a54c-62c2abc36274-marketplace-trusted-ca\") pod \"db624f52-ab96-42e2-a54c-62c2abc36274\" (UID: \"db624f52-ab96-42e2-a54c-62c2abc36274\") "
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.619709 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/db624f52-ab96-42e2-a54c-62c2abc36274-marketplace-operator-metrics\") pod \"db624f52-ab96-42e2-a54c-62c2abc36274\" (UID: \"db624f52-ab96-42e2-a54c-62c2abc36274\") "
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.619727 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0703e2f-994d-4ba3-8461-1c90ae1a063d-catalog-content\") pod \"f0703e2f-994d-4ba3-8461-1c90ae1a063d\" (UID: \"f0703e2f-994d-4ba3-8461-1c90ae1a063d\") "
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.619746 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3d779fd-e36a-44a9-93d2-266fc054b0f9-utilities\") pod \"b3d779fd-e36a-44a9-93d2-266fc054b0f9\" (UID: \"b3d779fd-e36a-44a9-93d2-266fc054b0f9\") "
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.619769 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq4ks\" (UniqueName: \"kubernetes.io/projected/f0703e2f-994d-4ba3-8461-1c90ae1a063d-kube-api-access-dq4ks\") pod \"f0703e2f-994d-4ba3-8461-1c90ae1a063d\" (UID: \"f0703e2f-994d-4ba3-8461-1c90ae1a063d\") "
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.620914 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09021957-55db-401c-a705-d3d308653cc9-utilities" (OuterVolumeSpecName: "utilities") pod "09021957-55db-401c-a705-d3d308653cc9" (UID: "09021957-55db-401c-a705-d3d308653cc9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.621197 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3d779fd-e36a-44a9-93d2-266fc054b0f9-utilities" (OuterVolumeSpecName: "utilities") pod "b3d779fd-e36a-44a9-93d2-266fc054b0f9" (UID: "b3d779fd-e36a-44a9-93d2-266fc054b0f9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.621346 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db624f52-ab96-42e2-a54c-62c2abc36274-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "db624f52-ab96-42e2-a54c-62c2abc36274" (UID: "db624f52-ab96-42e2-a54c-62c2abc36274"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.625523 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3d779fd-e36a-44a9-93d2-266fc054b0f9-kube-api-access-q6gcs" (OuterVolumeSpecName: "kube-api-access-q6gcs") pod "b3d779fd-e36a-44a9-93d2-266fc054b0f9" (UID: "b3d779fd-e36a-44a9-93d2-266fc054b0f9"). InnerVolumeSpecName "kube-api-access-q6gcs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.625606 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0703e2f-994d-4ba3-8461-1c90ae1a063d-kube-api-access-dq4ks" (OuterVolumeSpecName: "kube-api-access-dq4ks") pod "f0703e2f-994d-4ba3-8461-1c90ae1a063d" (UID: "f0703e2f-994d-4ba3-8461-1c90ae1a063d"). InnerVolumeSpecName "kube-api-access-dq4ks". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.627319 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09021957-55db-401c-a705-d3d308653cc9-kube-api-access-hz2vc" (OuterVolumeSpecName: "kube-api-access-hz2vc") pod "09021957-55db-401c-a705-d3d308653cc9" (UID: "09021957-55db-401c-a705-d3d308653cc9"). InnerVolumeSpecName "kube-api-access-hz2vc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.627383 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db624f52-ab96-42e2-a54c-62c2abc36274-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "db624f52-ab96-42e2-a54c-62c2abc36274" (UID: "db624f52-ab96-42e2-a54c-62c2abc36274"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.682455 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hd5kl"
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.702337 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3d779fd-e36a-44a9-93d2-266fc054b0f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3d779fd-e36a-44a9-93d2-266fc054b0f9" (UID: "b3d779fd-e36a-44a9-93d2-266fc054b0f9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.709644 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09021957-55db-401c-a705-d3d308653cc9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "09021957-55db-401c-a705-d3d308653cc9" (UID: "09021957-55db-401c-a705-d3d308653cc9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.720505 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdrth\" (UniqueName: \"kubernetes.io/projected/db624f52-ab96-42e2-a54c-62c2abc36274-kube-api-access-sdrth\") pod \"db624f52-ab96-42e2-a54c-62c2abc36274\" (UID: \"db624f52-ab96-42e2-a54c-62c2abc36274\") "
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.720566 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0703e2f-994d-4ba3-8461-1c90ae1a063d-utilities\") pod \"f0703e2f-994d-4ba3-8461-1c90ae1a063d\" (UID: \"f0703e2f-994d-4ba3-8461-1c90ae1a063d\") "
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.721104 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6gcs\" (UniqueName: \"kubernetes.io/projected/b3d779fd-e36a-44a9-93d2-266fc054b0f9-kube-api-access-q6gcs\") on node \"crc\" DevicePath \"\""
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.721132 4928 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3d779fd-e36a-44a9-93d2-266fc054b0f9-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.721143 4928 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09021957-55db-401c-a705-d3d308653cc9-utilities\") on node \"crc\" DevicePath \"\""
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.721152 4928 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db624f52-ab96-42e2-a54c-62c2abc36274-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.721162 4928 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/db624f52-ab96-42e2-a54c-62c2abc36274-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.721171 4928 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3d779fd-e36a-44a9-93d2-266fc054b0f9-utilities\") on node \"crc\" DevicePath \"\""
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.721206 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq4ks\" (UniqueName: \"kubernetes.io/projected/f0703e2f-994d-4ba3-8461-1c90ae1a063d-kube-api-access-dq4ks\") on node \"crc\" DevicePath \"\""
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.721240 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hz2vc\" (UniqueName: \"kubernetes.io/projected/09021957-55db-401c-a705-d3d308653cc9-kube-api-access-hz2vc\") on node \"crc\" DevicePath \"\""
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.721251 4928 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09021957-55db-401c-a705-d3d308653cc9-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.721897 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0703e2f-994d-4ba3-8461-1c90ae1a063d-utilities" (OuterVolumeSpecName: "utilities") pod "f0703e2f-994d-4ba3-8461-1c90ae1a063d" (UID: "f0703e2f-994d-4ba3-8461-1c90ae1a063d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.728316 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db624f52-ab96-42e2-a54c-62c2abc36274-kube-api-access-sdrth" (OuterVolumeSpecName: "kube-api-access-sdrth") pod "db624f52-ab96-42e2-a54c-62c2abc36274" (UID: "db624f52-ab96-42e2-a54c-62c2abc36274"). InnerVolumeSpecName "kube-api-access-sdrth". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.742776 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0703e2f-994d-4ba3-8461-1c90ae1a063d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f0703e2f-994d-4ba3-8461-1c90ae1a063d" (UID: "f0703e2f-994d-4ba3-8461-1c90ae1a063d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.744202 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8v67p"]
Nov 22 00:11:13 crc kubenswrapper[4928]: W1122 00:11:13.746121 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19f06771_3add_4b82_a14e_a3395bb04a75.slice/crio-020e60cdc8778f00ff37d802caf88d0cd7de305e55c31fdbeebb6e6c88dbce90 WatchSource:0}: Error finding container 020e60cdc8778f00ff37d802caf88d0cd7de305e55c31fdbeebb6e6c88dbce90: Status 404 returned error can't find the container with id 020e60cdc8778f00ff37d802caf88d0cd7de305e55c31fdbeebb6e6c88dbce90
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.822085 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcbsf\" (UniqueName: \"kubernetes.io/projected/0caf4878-137a-405f-9173-3187372a5135-kube-api-access-rcbsf\") pod \"0caf4878-137a-405f-9173-3187372a5135\" (UID: \"0caf4878-137a-405f-9173-3187372a5135\") "
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.822913 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0caf4878-137a-405f-9173-3187372a5135-catalog-content\") pod \"0caf4878-137a-405f-9173-3187372a5135\" (UID: \"0caf4878-137a-405f-9173-3187372a5135\") "
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.825076 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0caf4878-137a-405f-9173-3187372a5135-utilities\") pod \"0caf4878-137a-405f-9173-3187372a5135\" (UID: \"0caf4878-137a-405f-9173-3187372a5135\") "
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.825851 4928 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0703e2f-994d-4ba3-8461-1c90ae1a063d-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.825885 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdrth\" (UniqueName: \"kubernetes.io/projected/db624f52-ab96-42e2-a54c-62c2abc36274-kube-api-access-sdrth\") on node \"crc\" DevicePath \"\""
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.825899 4928 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0703e2f-994d-4ba3-8461-1c90ae1a063d-utilities\") on node \"crc\" DevicePath \"\""
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.825925 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0caf4878-137a-405f-9173-3187372a5135-utilities" (OuterVolumeSpecName: "utilities") pod "0caf4878-137a-405f-9173-3187372a5135" (UID: "0caf4878-137a-405f-9173-3187372a5135"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.827172 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0caf4878-137a-405f-9173-3187372a5135-kube-api-access-rcbsf" (OuterVolumeSpecName: "kube-api-access-rcbsf") pod "0caf4878-137a-405f-9173-3187372a5135" (UID: "0caf4878-137a-405f-9173-3187372a5135"). InnerVolumeSpecName "kube-api-access-rcbsf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.840039 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0caf4878-137a-405f-9173-3187372a5135-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0caf4878-137a-405f-9173-3187372a5135" (UID: "0caf4878-137a-405f-9173-3187372a5135"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.927491 4928 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0caf4878-137a-405f-9173-3187372a5135-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.927525 4928 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0caf4878-137a-405f-9173-3187372a5135-utilities\") on node \"crc\" DevicePath \"\""
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.927536 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcbsf\" (UniqueName: \"kubernetes.io/projected/0caf4878-137a-405f-9173-3187372a5135-kube-api-access-rcbsf\") on node \"crc\" DevicePath \"\""
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.998390 4928 generic.go:334] "Generic (PLEG): container finished" podID="09021957-55db-401c-a705-d3d308653cc9" containerID="2e4cd5ec92eabc0d55429caed569229d3fd543b85a4683a1d76657f538a0e00c" exitCode=0
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.998445 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rrw57"
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.998457 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrw57" event={"ID":"09021957-55db-401c-a705-d3d308653cc9","Type":"ContainerDied","Data":"2e4cd5ec92eabc0d55429caed569229d3fd543b85a4683a1d76657f538a0e00c"}
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.998482 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrw57" event={"ID":"09021957-55db-401c-a705-d3d308653cc9","Type":"ContainerDied","Data":"346ae5979c5e6aed80a33cb7f31c8d480dca4fbbea067b03ea834ff325ee11ad"}
Nov 22 00:11:13 crc kubenswrapper[4928]: I1122 00:11:13.998499 4928 scope.go:117] "RemoveContainer" containerID="2e4cd5ec92eabc0d55429caed569229d3fd543b85a4683a1d76657f538a0e00c"
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.002220 4928 generic.go:334] "Generic (PLEG): container finished" podID="0caf4878-137a-405f-9173-3187372a5135" containerID="6f57cdf2e6a3cb4f28bbd71115c28f3fc500dccf9c5346bd2cc3bea23a90b424" exitCode=0
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.002308 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hd5kl" event={"ID":"0caf4878-137a-405f-9173-3187372a5135","Type":"ContainerDied","Data":"6f57cdf2e6a3cb4f28bbd71115c28f3fc500dccf9c5346bd2cc3bea23a90b424"}
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.002339 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hd5kl" event={"ID":"0caf4878-137a-405f-9173-3187372a5135","Type":"ContainerDied","Data":"487a3a715961dc71072fdf3b62df3ffed600b202f3a4baa7e9b2d8c6a7f8b47b"}
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.002413 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hd5kl"
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.005286 4928 generic.go:334] "Generic (PLEG): container finished" podID="f0703e2f-994d-4ba3-8461-1c90ae1a063d" containerID="69f7434a4db814cb680cb64399a4bac7ea89cfcc4a9385b40f248109a1314ef1" exitCode=0
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.005337 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhl2z" event={"ID":"f0703e2f-994d-4ba3-8461-1c90ae1a063d","Type":"ContainerDied","Data":"69f7434a4db814cb680cb64399a4bac7ea89cfcc4a9385b40f248109a1314ef1"}
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.005358 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhl2z" event={"ID":"f0703e2f-994d-4ba3-8461-1c90ae1a063d","Type":"ContainerDied","Data":"61e00fed36745915a14f8030df59f301cb196b10ed78f54dc4920b69126d3c65"}
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.005421 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hhl2z"
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.009166 4928 generic.go:334] "Generic (PLEG): container finished" podID="db624f52-ab96-42e2-a54c-62c2abc36274" containerID="d1bd781d185e15b4394f2613296f454a0919def47554a012ee572aa17c9558df" exitCode=0
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.009218 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-54l8n" event={"ID":"db624f52-ab96-42e2-a54c-62c2abc36274","Type":"ContainerDied","Data":"d1bd781d185e15b4394f2613296f454a0919def47554a012ee572aa17c9558df"}
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.009239 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-54l8n" event={"ID":"db624f52-ab96-42e2-a54c-62c2abc36274","Type":"ContainerDied","Data":"6889fc0ffb27228b5070008f50fe1b555791a72c4495b3f63d63e476e24d9574"}
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.009289 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-54l8n"
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.012931 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8v67p" event={"ID":"19f06771-3add-4b82-a14e-a3395bb04a75","Type":"ContainerStarted","Data":"821a84c396738c5739cb8618cd011a4365c7d27148c76a1c373b7695bb2fed79"}
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.012998 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8v67p" event={"ID":"19f06771-3add-4b82-a14e-a3395bb04a75","Type":"ContainerStarted","Data":"020e60cdc8778f00ff37d802caf88d0cd7de305e55c31fdbeebb6e6c88dbce90"}
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.013120 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-8v67p"
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.014727 4928 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-8v67p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.56:8080/healthz\": dial tcp 10.217.0.56:8080: connect: connection refused" start-of-body=
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.014766 4928 generic.go:334] "Generic (PLEG): container finished" podID="b3d779fd-e36a-44a9-93d2-266fc054b0f9" containerID="83a80fcd836e1dadd71754c27ce3d4b6357b7a3dafea96b37fb958a67ff7e15e" exitCode=0
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.014813 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wnvn4" event={"ID":"b3d779fd-e36a-44a9-93d2-266fc054b0f9","Type":"ContainerDied","Data":"83a80fcd836e1dadd71754c27ce3d4b6357b7a3dafea96b37fb958a67ff7e15e"}
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.014836 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wnvn4" event={"ID":"b3d779fd-e36a-44a9-93d2-266fc054b0f9","Type":"ContainerDied","Data":"a735391ef56a702e5a446e0437046998253032fd194328663c7747d9aab434f6"}
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.014803 4928 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-8v67p" podUID="19f06771-3add-4b82-a14e-a3395bb04a75" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.56:8080/healthz\": dial tcp 10.217.0.56:8080: connect: connection refused"
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.014941 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wnvn4"
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.026432 4928 scope.go:117] "RemoveContainer" containerID="0f86c0f71d51e89d72528e8859744bb21f3290133cd4fbb82190f29221709900"
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.052411 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-8v67p" podStartSLOduration=1.052380588 podStartE2EDuration="1.052380588s" podCreationTimestamp="2025-11-22 00:11:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:11:14.045417326 +0000 UTC m=+309.333448527" watchObservedRunningTime="2025-11-22 00:11:14.052380588 +0000 UTC m=+309.340411779"
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.058369 4928 scope.go:117] "RemoveContainer" containerID="72bbe4b4b203d04a2da8a9587d5075a389cc2de6a513f79e35e9de5d09b0e1a1"
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.061334 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rrw57"]
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.064962 4928 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rrw57"]
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.080965 4928 scope.go:117] "RemoveContainer" containerID="2e4cd5ec92eabc0d55429caed569229d3fd543b85a4683a1d76657f538a0e00c"
Nov 22 00:11:14 crc kubenswrapper[4928]: E1122 00:11:14.082211 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e4cd5ec92eabc0d55429caed569229d3fd543b85a4683a1d76657f538a0e00c\": container with ID starting with 2e4cd5ec92eabc0d55429caed569229d3fd543b85a4683a1d76657f538a0e00c not found: ID does not exist" containerID="2e4cd5ec92eabc0d55429caed569229d3fd543b85a4683a1d76657f538a0e00c"
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.082885 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e4cd5ec92eabc0d55429caed569229d3fd543b85a4683a1d76657f538a0e00c"} err="failed to get container status \"2e4cd5ec92eabc0d55429caed569229d3fd543b85a4683a1d76657f538a0e00c\": rpc error: code = NotFound desc = could not find container \"2e4cd5ec92eabc0d55429caed569229d3fd543b85a4683a1d76657f538a0e00c\": container with ID starting with 2e4cd5ec92eabc0d55429caed569229d3fd543b85a4683a1d76657f538a0e00c not found: ID does not exist"
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.082921 4928 scope.go:117] "RemoveContainer" containerID="0f86c0f71d51e89d72528e8859744bb21f3290133cd4fbb82190f29221709900"
Nov 22 00:11:14 crc kubenswrapper[4928]: E1122 00:11:14.083205 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f86c0f71d51e89d72528e8859744bb21f3290133cd4fbb82190f29221709900\": container with ID starting with 0f86c0f71d51e89d72528e8859744bb21f3290133cd4fbb82190f29221709900 not found: ID does not exist" containerID="0f86c0f71d51e89d72528e8859744bb21f3290133cd4fbb82190f29221709900"
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.083233 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f86c0f71d51e89d72528e8859744bb21f3290133cd4fbb82190f29221709900"} err="failed to get container status \"0f86c0f71d51e89d72528e8859744bb21f3290133cd4fbb82190f29221709900\": rpc error: code = NotFound desc = could not find container \"0f86c0f71d51e89d72528e8859744bb21f3290133cd4fbb82190f29221709900\": container with ID starting with 0f86c0f71d51e89d72528e8859744bb21f3290133cd4fbb82190f29221709900 not found: ID does not exist"
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.083248 4928 scope.go:117] "RemoveContainer" containerID="72bbe4b4b203d04a2da8a9587d5075a389cc2de6a513f79e35e9de5d09b0e1a1"
Nov 22 00:11:14 crc kubenswrapper[4928]: E1122 00:11:14.083513 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72bbe4b4b203d04a2da8a9587d5075a389cc2de6a513f79e35e9de5d09b0e1a1\": container with ID starting with 72bbe4b4b203d04a2da8a9587d5075a389cc2de6a513f79e35e9de5d09b0e1a1 not found: ID does not exist" containerID="72bbe4b4b203d04a2da8a9587d5075a389cc2de6a513f79e35e9de5d09b0e1a1"
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.083564 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72bbe4b4b203d04a2da8a9587d5075a389cc2de6a513f79e35e9de5d09b0e1a1"} err="failed to get container status \"72bbe4b4b203d04a2da8a9587d5075a389cc2de6a513f79e35e9de5d09b0e1a1\": rpc error: code = NotFound desc = could not find container \"72bbe4b4b203d04a2da8a9587d5075a389cc2de6a513f79e35e9de5d09b0e1a1\": container with ID starting with 72bbe4b4b203d04a2da8a9587d5075a389cc2de6a513f79e35e9de5d09b0e1a1 not found: ID does not exist"
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.083599 4928 scope.go:117] "RemoveContainer" containerID="6f57cdf2e6a3cb4f28bbd71115c28f3fc500dccf9c5346bd2cc3bea23a90b424"
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.083634 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wnvn4"]
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.096183 4928 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wnvn4"]
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.097832 4928 scope.go:117] "RemoveContainer" containerID="4c089a3f1a49403199fd931afe5006cd13964526e6730f6fb7f666e7109935ec"
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.105504 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hd5kl"]
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.113087 4928 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hd5kl"]
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.115972 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-54l8n"]
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.117189 4928 scope.go:117] "RemoveContainer" containerID="5eae302f4a1cca843ab723c93b9f9c7f6e73a4f7660f3fd5ecb221cf48dba15f"
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.118547 4928 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-54l8n"]
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.120836 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hhl2z"]
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.123774 4928 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hhl2z"]
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.130690 4928 scope.go:117] "RemoveContainer" containerID="6f57cdf2e6a3cb4f28bbd71115c28f3fc500dccf9c5346bd2cc3bea23a90b424"
Nov 22 00:11:14 crc kubenswrapper[4928]: E1122 00:11:14.131247 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f57cdf2e6a3cb4f28bbd71115c28f3fc500dccf9c5346bd2cc3bea23a90b424\": container with ID starting with 6f57cdf2e6a3cb4f28bbd71115c28f3fc500dccf9c5346bd2cc3bea23a90b424 not found: ID does not exist" containerID="6f57cdf2e6a3cb4f28bbd71115c28f3fc500dccf9c5346bd2cc3bea23a90b424"
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.131272 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f57cdf2e6a3cb4f28bbd71115c28f3fc500dccf9c5346bd2cc3bea23a90b424"} err="failed to get container status \"6f57cdf2e6a3cb4f28bbd71115c28f3fc500dccf9c5346bd2cc3bea23a90b424\": rpc error: code = NotFound desc = could not find container \"6f57cdf2e6a3cb4f28bbd71115c28f3fc500dccf9c5346bd2cc3bea23a90b424\": container with ID starting with 6f57cdf2e6a3cb4f28bbd71115c28f3fc500dccf9c5346bd2cc3bea23a90b424 not found: ID does not exist"
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.131303 4928 scope.go:117] "RemoveContainer" containerID="4c089a3f1a49403199fd931afe5006cd13964526e6730f6fb7f666e7109935ec"
Nov 22 00:11:14 crc kubenswrapper[4928]: E1122 00:11:14.131552 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c089a3f1a49403199fd931afe5006cd13964526e6730f6fb7f666e7109935ec\": container with ID starting with 4c089a3f1a49403199fd931afe5006cd13964526e6730f6fb7f666e7109935ec not found: ID does not exist" containerID="4c089a3f1a49403199fd931afe5006cd13964526e6730f6fb7f666e7109935ec"
Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.131567 4928 pod_container_deletor.go:53] "DeleteContainer returned error"
containerID={"Type":"cri-o","ID":"4c089a3f1a49403199fd931afe5006cd13964526e6730f6fb7f666e7109935ec"} err="failed to get container status \"4c089a3f1a49403199fd931afe5006cd13964526e6730f6fb7f666e7109935ec\": rpc error: code = NotFound desc = could not find container \"4c089a3f1a49403199fd931afe5006cd13964526e6730f6fb7f666e7109935ec\": container with ID starting with 4c089a3f1a49403199fd931afe5006cd13964526e6730f6fb7f666e7109935ec not found: ID does not exist" Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.131579 4928 scope.go:117] "RemoveContainer" containerID="5eae302f4a1cca843ab723c93b9f9c7f6e73a4f7660f3fd5ecb221cf48dba15f" Nov 22 00:11:14 crc kubenswrapper[4928]: E1122 00:11:14.131760 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5eae302f4a1cca843ab723c93b9f9c7f6e73a4f7660f3fd5ecb221cf48dba15f\": container with ID starting with 5eae302f4a1cca843ab723c93b9f9c7f6e73a4f7660f3fd5ecb221cf48dba15f not found: ID does not exist" containerID="5eae302f4a1cca843ab723c93b9f9c7f6e73a4f7660f3fd5ecb221cf48dba15f" Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.131774 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eae302f4a1cca843ab723c93b9f9c7f6e73a4f7660f3fd5ecb221cf48dba15f"} err="failed to get container status \"5eae302f4a1cca843ab723c93b9f9c7f6e73a4f7660f3fd5ecb221cf48dba15f\": rpc error: code = NotFound desc = could not find container \"5eae302f4a1cca843ab723c93b9f9c7f6e73a4f7660f3fd5ecb221cf48dba15f\": container with ID starting with 5eae302f4a1cca843ab723c93b9f9c7f6e73a4f7660f3fd5ecb221cf48dba15f not found: ID does not exist" Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.131805 4928 scope.go:117] "RemoveContainer" containerID="69f7434a4db814cb680cb64399a4bac7ea89cfcc4a9385b40f248109a1314ef1" Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.172164 4928 scope.go:117] "RemoveContainer" 
containerID="fde630a414a8022c871d81adb7092f3e3ee9591d5379bdf3f66fabf50bf7e6c4" Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.187555 4928 scope.go:117] "RemoveContainer" containerID="c8bbb8cc73f7aff26b610e8379985896d95326d52a2d87dc526fb5ca65b4f0a8" Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.203713 4928 scope.go:117] "RemoveContainer" containerID="69f7434a4db814cb680cb64399a4bac7ea89cfcc4a9385b40f248109a1314ef1" Nov 22 00:11:14 crc kubenswrapper[4928]: E1122 00:11:14.204123 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69f7434a4db814cb680cb64399a4bac7ea89cfcc4a9385b40f248109a1314ef1\": container with ID starting with 69f7434a4db814cb680cb64399a4bac7ea89cfcc4a9385b40f248109a1314ef1 not found: ID does not exist" containerID="69f7434a4db814cb680cb64399a4bac7ea89cfcc4a9385b40f248109a1314ef1" Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.204160 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69f7434a4db814cb680cb64399a4bac7ea89cfcc4a9385b40f248109a1314ef1"} err="failed to get container status \"69f7434a4db814cb680cb64399a4bac7ea89cfcc4a9385b40f248109a1314ef1\": rpc error: code = NotFound desc = could not find container \"69f7434a4db814cb680cb64399a4bac7ea89cfcc4a9385b40f248109a1314ef1\": container with ID starting with 69f7434a4db814cb680cb64399a4bac7ea89cfcc4a9385b40f248109a1314ef1 not found: ID does not exist" Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.204187 4928 scope.go:117] "RemoveContainer" containerID="fde630a414a8022c871d81adb7092f3e3ee9591d5379bdf3f66fabf50bf7e6c4" Nov 22 00:11:14 crc kubenswrapper[4928]: E1122 00:11:14.204449 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fde630a414a8022c871d81adb7092f3e3ee9591d5379bdf3f66fabf50bf7e6c4\": container with ID starting with 
fde630a414a8022c871d81adb7092f3e3ee9591d5379bdf3f66fabf50bf7e6c4 not found: ID does not exist" containerID="fde630a414a8022c871d81adb7092f3e3ee9591d5379bdf3f66fabf50bf7e6c4" Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.204479 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fde630a414a8022c871d81adb7092f3e3ee9591d5379bdf3f66fabf50bf7e6c4"} err="failed to get container status \"fde630a414a8022c871d81adb7092f3e3ee9591d5379bdf3f66fabf50bf7e6c4\": rpc error: code = NotFound desc = could not find container \"fde630a414a8022c871d81adb7092f3e3ee9591d5379bdf3f66fabf50bf7e6c4\": container with ID starting with fde630a414a8022c871d81adb7092f3e3ee9591d5379bdf3f66fabf50bf7e6c4 not found: ID does not exist" Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.204497 4928 scope.go:117] "RemoveContainer" containerID="c8bbb8cc73f7aff26b610e8379985896d95326d52a2d87dc526fb5ca65b4f0a8" Nov 22 00:11:14 crc kubenswrapper[4928]: E1122 00:11:14.205631 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8bbb8cc73f7aff26b610e8379985896d95326d52a2d87dc526fb5ca65b4f0a8\": container with ID starting with c8bbb8cc73f7aff26b610e8379985896d95326d52a2d87dc526fb5ca65b4f0a8 not found: ID does not exist" containerID="c8bbb8cc73f7aff26b610e8379985896d95326d52a2d87dc526fb5ca65b4f0a8" Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.205662 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8bbb8cc73f7aff26b610e8379985896d95326d52a2d87dc526fb5ca65b4f0a8"} err="failed to get container status \"c8bbb8cc73f7aff26b610e8379985896d95326d52a2d87dc526fb5ca65b4f0a8\": rpc error: code = NotFound desc = could not find container \"c8bbb8cc73f7aff26b610e8379985896d95326d52a2d87dc526fb5ca65b4f0a8\": container with ID starting with c8bbb8cc73f7aff26b610e8379985896d95326d52a2d87dc526fb5ca65b4f0a8 not found: ID does not 
exist" Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.205678 4928 scope.go:117] "RemoveContainer" containerID="d1bd781d185e15b4394f2613296f454a0919def47554a012ee572aa17c9558df" Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.222529 4928 scope.go:117] "RemoveContainer" containerID="d1bd781d185e15b4394f2613296f454a0919def47554a012ee572aa17c9558df" Nov 22 00:11:14 crc kubenswrapper[4928]: E1122 00:11:14.222977 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1bd781d185e15b4394f2613296f454a0919def47554a012ee572aa17c9558df\": container with ID starting with d1bd781d185e15b4394f2613296f454a0919def47554a012ee572aa17c9558df not found: ID does not exist" containerID="d1bd781d185e15b4394f2613296f454a0919def47554a012ee572aa17c9558df" Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.223014 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1bd781d185e15b4394f2613296f454a0919def47554a012ee572aa17c9558df"} err="failed to get container status \"d1bd781d185e15b4394f2613296f454a0919def47554a012ee572aa17c9558df\": rpc error: code = NotFound desc = could not find container \"d1bd781d185e15b4394f2613296f454a0919def47554a012ee572aa17c9558df\": container with ID starting with d1bd781d185e15b4394f2613296f454a0919def47554a012ee572aa17c9558df not found: ID does not exist" Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.223036 4928 scope.go:117] "RemoveContainer" containerID="83a80fcd836e1dadd71754c27ce3d4b6357b7a3dafea96b37fb958a67ff7e15e" Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.237444 4928 scope.go:117] "RemoveContainer" containerID="d18cfa6300799d05b4a29e7242e0922cef302467cd1374b2cfa30f4df18ddf69" Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.253785 4928 scope.go:117] "RemoveContainer" containerID="80036f143c3e639be2093758d4c011f5f11004e131192b9807a1e701455b04bf" Nov 22 00:11:14 crc 
kubenswrapper[4928]: I1122 00:11:14.271135 4928 scope.go:117] "RemoveContainer" containerID="83a80fcd836e1dadd71754c27ce3d4b6357b7a3dafea96b37fb958a67ff7e15e" Nov 22 00:11:14 crc kubenswrapper[4928]: E1122 00:11:14.276292 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83a80fcd836e1dadd71754c27ce3d4b6357b7a3dafea96b37fb958a67ff7e15e\": container with ID starting with 83a80fcd836e1dadd71754c27ce3d4b6357b7a3dafea96b37fb958a67ff7e15e not found: ID does not exist" containerID="83a80fcd836e1dadd71754c27ce3d4b6357b7a3dafea96b37fb958a67ff7e15e" Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.276335 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83a80fcd836e1dadd71754c27ce3d4b6357b7a3dafea96b37fb958a67ff7e15e"} err="failed to get container status \"83a80fcd836e1dadd71754c27ce3d4b6357b7a3dafea96b37fb958a67ff7e15e\": rpc error: code = NotFound desc = could not find container \"83a80fcd836e1dadd71754c27ce3d4b6357b7a3dafea96b37fb958a67ff7e15e\": container with ID starting with 83a80fcd836e1dadd71754c27ce3d4b6357b7a3dafea96b37fb958a67ff7e15e not found: ID does not exist" Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.276365 4928 scope.go:117] "RemoveContainer" containerID="d18cfa6300799d05b4a29e7242e0922cef302467cd1374b2cfa30f4df18ddf69" Nov 22 00:11:14 crc kubenswrapper[4928]: E1122 00:11:14.276915 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d18cfa6300799d05b4a29e7242e0922cef302467cd1374b2cfa30f4df18ddf69\": container with ID starting with d18cfa6300799d05b4a29e7242e0922cef302467cd1374b2cfa30f4df18ddf69 not found: ID does not exist" containerID="d18cfa6300799d05b4a29e7242e0922cef302467cd1374b2cfa30f4df18ddf69" Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.276951 4928 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d18cfa6300799d05b4a29e7242e0922cef302467cd1374b2cfa30f4df18ddf69"} err="failed to get container status \"d18cfa6300799d05b4a29e7242e0922cef302467cd1374b2cfa30f4df18ddf69\": rpc error: code = NotFound desc = could not find container \"d18cfa6300799d05b4a29e7242e0922cef302467cd1374b2cfa30f4df18ddf69\": container with ID starting with d18cfa6300799d05b4a29e7242e0922cef302467cd1374b2cfa30f4df18ddf69 not found: ID does not exist" Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.276989 4928 scope.go:117] "RemoveContainer" containerID="80036f143c3e639be2093758d4c011f5f11004e131192b9807a1e701455b04bf" Nov 22 00:11:14 crc kubenswrapper[4928]: E1122 00:11:14.277280 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80036f143c3e639be2093758d4c011f5f11004e131192b9807a1e701455b04bf\": container with ID starting with 80036f143c3e639be2093758d4c011f5f11004e131192b9807a1e701455b04bf not found: ID does not exist" containerID="80036f143c3e639be2093758d4c011f5f11004e131192b9807a1e701455b04bf" Nov 22 00:11:14 crc kubenswrapper[4928]: I1122 00:11:14.277307 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80036f143c3e639be2093758d4c011f5f11004e131192b9807a1e701455b04bf"} err="failed to get container status \"80036f143c3e639be2093758d4c011f5f11004e131192b9807a1e701455b04bf\": rpc error: code = NotFound desc = could not find container \"80036f143c3e639be2093758d4c011f5f11004e131192b9807a1e701455b04bf\": container with ID starting with 80036f143c3e639be2093758d4c011f5f11004e131192b9807a1e701455b04bf not found: ID does not exist" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.031298 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-8v67p" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.226680 4928 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vftb7"] Nov 22 00:11:15 crc kubenswrapper[4928]: E1122 00:11:15.227225 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09021957-55db-401c-a705-d3d308653cc9" containerName="extract-utilities" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.227238 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="09021957-55db-401c-a705-d3d308653cc9" containerName="extract-utilities" Nov 22 00:11:15 crc kubenswrapper[4928]: E1122 00:11:15.227247 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0caf4878-137a-405f-9173-3187372a5135" containerName="extract-content" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.227253 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="0caf4878-137a-405f-9173-3187372a5135" containerName="extract-content" Nov 22 00:11:15 crc kubenswrapper[4928]: E1122 00:11:15.227263 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0703e2f-994d-4ba3-8461-1c90ae1a063d" containerName="extract-content" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.227269 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0703e2f-994d-4ba3-8461-1c90ae1a063d" containerName="extract-content" Nov 22 00:11:15 crc kubenswrapper[4928]: E1122 00:11:15.227275 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db624f52-ab96-42e2-a54c-62c2abc36274" containerName="marketplace-operator" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.227282 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="db624f52-ab96-42e2-a54c-62c2abc36274" containerName="marketplace-operator" Nov 22 00:11:15 crc kubenswrapper[4928]: E1122 00:11:15.227289 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d779fd-e36a-44a9-93d2-266fc054b0f9" containerName="extract-utilities" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.227295 4928 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b3d779fd-e36a-44a9-93d2-266fc054b0f9" containerName="extract-utilities" Nov 22 00:11:15 crc kubenswrapper[4928]: E1122 00:11:15.227308 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d779fd-e36a-44a9-93d2-266fc054b0f9" containerName="extract-content" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.227313 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d779fd-e36a-44a9-93d2-266fc054b0f9" containerName="extract-content" Nov 22 00:11:15 crc kubenswrapper[4928]: E1122 00:11:15.227323 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09021957-55db-401c-a705-d3d308653cc9" containerName="registry-server" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.227329 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="09021957-55db-401c-a705-d3d308653cc9" containerName="registry-server" Nov 22 00:11:15 crc kubenswrapper[4928]: E1122 00:11:15.227338 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0703e2f-994d-4ba3-8461-1c90ae1a063d" containerName="extract-utilities" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.227344 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0703e2f-994d-4ba3-8461-1c90ae1a063d" containerName="extract-utilities" Nov 22 00:11:15 crc kubenswrapper[4928]: E1122 00:11:15.227352 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d779fd-e36a-44a9-93d2-266fc054b0f9" containerName="registry-server" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.227358 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d779fd-e36a-44a9-93d2-266fc054b0f9" containerName="registry-server" Nov 22 00:11:15 crc kubenswrapper[4928]: E1122 00:11:15.227364 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09021957-55db-401c-a705-d3d308653cc9" containerName="extract-content" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.227369 4928 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="09021957-55db-401c-a705-d3d308653cc9" containerName="extract-content" Nov 22 00:11:15 crc kubenswrapper[4928]: E1122 00:11:15.227379 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0caf4878-137a-405f-9173-3187372a5135" containerName="registry-server" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.227385 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="0caf4878-137a-405f-9173-3187372a5135" containerName="registry-server" Nov 22 00:11:15 crc kubenswrapper[4928]: E1122 00:11:15.227392 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0caf4878-137a-405f-9173-3187372a5135" containerName="extract-utilities" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.227398 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="0caf4878-137a-405f-9173-3187372a5135" containerName="extract-utilities" Nov 22 00:11:15 crc kubenswrapper[4928]: E1122 00:11:15.227406 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0703e2f-994d-4ba3-8461-1c90ae1a063d" containerName="registry-server" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.227412 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0703e2f-994d-4ba3-8461-1c90ae1a063d" containerName="registry-server" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.227491 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d779fd-e36a-44a9-93d2-266fc054b0f9" containerName="registry-server" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.227502 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="09021957-55db-401c-a705-d3d308653cc9" containerName="registry-server" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.227511 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="db624f52-ab96-42e2-a54c-62c2abc36274" containerName="marketplace-operator" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.227519 4928 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="f0703e2f-994d-4ba3-8461-1c90ae1a063d" containerName="registry-server" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.227526 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="0caf4878-137a-405f-9173-3187372a5135" containerName="registry-server" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.228184 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vftb7" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.230966 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.242253 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86thh\" (UniqueName: \"kubernetes.io/projected/015c012b-0209-45c2-bf4b-09f2e274b810-kube-api-access-86thh\") pod \"redhat-marketplace-vftb7\" (UID: \"015c012b-0209-45c2-bf4b-09f2e274b810\") " pod="openshift-marketplace/redhat-marketplace-vftb7" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.242301 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/015c012b-0209-45c2-bf4b-09f2e274b810-utilities\") pod \"redhat-marketplace-vftb7\" (UID: \"015c012b-0209-45c2-bf4b-09f2e274b810\") " pod="openshift-marketplace/redhat-marketplace-vftb7" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.242432 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/015c012b-0209-45c2-bf4b-09f2e274b810-catalog-content\") pod \"redhat-marketplace-vftb7\" (UID: \"015c012b-0209-45c2-bf4b-09f2e274b810\") " pod="openshift-marketplace/redhat-marketplace-vftb7" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.252734 4928 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vftb7"] Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.343728 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/015c012b-0209-45c2-bf4b-09f2e274b810-catalog-content\") pod \"redhat-marketplace-vftb7\" (UID: \"015c012b-0209-45c2-bf4b-09f2e274b810\") " pod="openshift-marketplace/redhat-marketplace-vftb7" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.344097 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86thh\" (UniqueName: \"kubernetes.io/projected/015c012b-0209-45c2-bf4b-09f2e274b810-kube-api-access-86thh\") pod \"redhat-marketplace-vftb7\" (UID: \"015c012b-0209-45c2-bf4b-09f2e274b810\") " pod="openshift-marketplace/redhat-marketplace-vftb7" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.344183 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/015c012b-0209-45c2-bf4b-09f2e274b810-utilities\") pod \"redhat-marketplace-vftb7\" (UID: \"015c012b-0209-45c2-bf4b-09f2e274b810\") " pod="openshift-marketplace/redhat-marketplace-vftb7" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.344403 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/015c012b-0209-45c2-bf4b-09f2e274b810-catalog-content\") pod \"redhat-marketplace-vftb7\" (UID: \"015c012b-0209-45c2-bf4b-09f2e274b810\") " pod="openshift-marketplace/redhat-marketplace-vftb7" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.344583 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/015c012b-0209-45c2-bf4b-09f2e274b810-utilities\") pod \"redhat-marketplace-vftb7\" (UID: \"015c012b-0209-45c2-bf4b-09f2e274b810\") " 
pod="openshift-marketplace/redhat-marketplace-vftb7" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.362482 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86thh\" (UniqueName: \"kubernetes.io/projected/015c012b-0209-45c2-bf4b-09f2e274b810-kube-api-access-86thh\") pod \"redhat-marketplace-vftb7\" (UID: \"015c012b-0209-45c2-bf4b-09f2e274b810\") " pod="openshift-marketplace/redhat-marketplace-vftb7" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.421455 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4s2zz"] Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.422588 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4s2zz" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.428223 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.433514 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4s2zz"] Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.445009 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9f5d528-c852-4804-8ac3-9c3fd00cc1d3-utilities\") pod \"redhat-operators-4s2zz\" (UID: \"f9f5d528-c852-4804-8ac3-9c3fd00cc1d3\") " pod="openshift-marketplace/redhat-operators-4s2zz" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.445093 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c98w\" (UniqueName: \"kubernetes.io/projected/f9f5d528-c852-4804-8ac3-9c3fd00cc1d3-kube-api-access-7c98w\") pod \"redhat-operators-4s2zz\" (UID: \"f9f5d528-c852-4804-8ac3-9c3fd00cc1d3\") " pod="openshift-marketplace/redhat-operators-4s2zz" Nov 22 
00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.445332 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9f5d528-c852-4804-8ac3-9c3fd00cc1d3-catalog-content\") pod \"redhat-operators-4s2zz\" (UID: \"f9f5d528-c852-4804-8ac3-9c3fd00cc1d3\") " pod="openshift-marketplace/redhat-operators-4s2zz" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.546078 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c98w\" (UniqueName: \"kubernetes.io/projected/f9f5d528-c852-4804-8ac3-9c3fd00cc1d3-kube-api-access-7c98w\") pod \"redhat-operators-4s2zz\" (UID: \"f9f5d528-c852-4804-8ac3-9c3fd00cc1d3\") " pod="openshift-marketplace/redhat-operators-4s2zz" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.546150 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9f5d528-c852-4804-8ac3-9c3fd00cc1d3-catalog-content\") pod \"redhat-operators-4s2zz\" (UID: \"f9f5d528-c852-4804-8ac3-9c3fd00cc1d3\") " pod="openshift-marketplace/redhat-operators-4s2zz" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.546177 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9f5d528-c852-4804-8ac3-9c3fd00cc1d3-utilities\") pod \"redhat-operators-4s2zz\" (UID: \"f9f5d528-c852-4804-8ac3-9c3fd00cc1d3\") " pod="openshift-marketplace/redhat-operators-4s2zz" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.546577 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9f5d528-c852-4804-8ac3-9c3fd00cc1d3-utilities\") pod \"redhat-operators-4s2zz\" (UID: \"f9f5d528-c852-4804-8ac3-9c3fd00cc1d3\") " pod="openshift-marketplace/redhat-operators-4s2zz" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 
00:11:15.546607 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vftb7" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.546682 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9f5d528-c852-4804-8ac3-9c3fd00cc1d3-catalog-content\") pod \"redhat-operators-4s2zz\" (UID: \"f9f5d528-c852-4804-8ac3-9c3fd00cc1d3\") " pod="openshift-marketplace/redhat-operators-4s2zz" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.564844 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c98w\" (UniqueName: \"kubernetes.io/projected/f9f5d528-c852-4804-8ac3-9c3fd00cc1d3-kube-api-access-7c98w\") pod \"redhat-operators-4s2zz\" (UID: \"f9f5d528-c852-4804-8ac3-9c3fd00cc1d3\") " pod="openshift-marketplace/redhat-operators-4s2zz" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.626427 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09021957-55db-401c-a705-d3d308653cc9" path="/var/lib/kubelet/pods/09021957-55db-401c-a705-d3d308653cc9/volumes" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.627269 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0caf4878-137a-405f-9173-3187372a5135" path="/var/lib/kubelet/pods/0caf4878-137a-405f-9173-3187372a5135/volumes" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.628123 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3d779fd-e36a-44a9-93d2-266fc054b0f9" path="/var/lib/kubelet/pods/b3d779fd-e36a-44a9-93d2-266fc054b0f9/volumes" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.629518 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db624f52-ab96-42e2-a54c-62c2abc36274" path="/var/lib/kubelet/pods/db624f52-ab96-42e2-a54c-62c2abc36274/volumes" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.630151 4928 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0703e2f-994d-4ba3-8461-1c90ae1a063d" path="/var/lib/kubelet/pods/f0703e2f-994d-4ba3-8461-1c90ae1a063d/volumes" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.729838 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vftb7"] Nov 22 00:11:15 crc kubenswrapper[4928]: W1122 00:11:15.730569 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod015c012b_0209_45c2_bf4b_09f2e274b810.slice/crio-e7ab809947e90e45385e0c39b022afdfd5c60944315162084445be2cabc2e761 WatchSource:0}: Error finding container e7ab809947e90e45385e0c39b022afdfd5c60944315162084445be2cabc2e761: Status 404 returned error can't find the container with id e7ab809947e90e45385e0c39b022afdfd5c60944315162084445be2cabc2e761 Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.737691 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4s2zz" Nov 22 00:11:15 crc kubenswrapper[4928]: I1122 00:11:15.916926 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4s2zz"] Nov 22 00:11:15 crc kubenswrapper[4928]: W1122 00:11:15.922649 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9f5d528_c852_4804_8ac3_9c3fd00cc1d3.slice/crio-91dd2249104a2c26dabb223b961efa0ffaa4627846f1dfaf66f68f0506ea0fa3 WatchSource:0}: Error finding container 91dd2249104a2c26dabb223b961efa0ffaa4627846f1dfaf66f68f0506ea0fa3: Status 404 returned error can't find the container with id 91dd2249104a2c26dabb223b961efa0ffaa4627846f1dfaf66f68f0506ea0fa3 Nov 22 00:11:16 crc kubenswrapper[4928]: I1122 00:11:16.031670 4928 generic.go:334] "Generic (PLEG): container finished" podID="015c012b-0209-45c2-bf4b-09f2e274b810" containerID="d012c0ece327fb5218458a88c26446c275d4846ce7b313ce7e099ccf4772db5d" exitCode=0 Nov 22 00:11:16 crc kubenswrapper[4928]: I1122 00:11:16.031729 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vftb7" event={"ID":"015c012b-0209-45c2-bf4b-09f2e274b810","Type":"ContainerDied","Data":"d012c0ece327fb5218458a88c26446c275d4846ce7b313ce7e099ccf4772db5d"} Nov 22 00:11:16 crc kubenswrapper[4928]: I1122 00:11:16.031751 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vftb7" event={"ID":"015c012b-0209-45c2-bf4b-09f2e274b810","Type":"ContainerStarted","Data":"e7ab809947e90e45385e0c39b022afdfd5c60944315162084445be2cabc2e761"} Nov 22 00:11:16 crc kubenswrapper[4928]: I1122 00:11:16.033911 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4s2zz" 
event={"ID":"f9f5d528-c852-4804-8ac3-9c3fd00cc1d3","Type":"ContainerStarted","Data":"91dd2249104a2c26dabb223b961efa0ffaa4627846f1dfaf66f68f0506ea0fa3"} Nov 22 00:11:17 crc kubenswrapper[4928]: I1122 00:11:17.039725 4928 generic.go:334] "Generic (PLEG): container finished" podID="f9f5d528-c852-4804-8ac3-9c3fd00cc1d3" containerID="a9976bb8d866cc051393111ccfd6cc8908fe75f0276bbd65af067754b1f949c1" exitCode=0 Nov 22 00:11:17 crc kubenswrapper[4928]: I1122 00:11:17.039777 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4s2zz" event={"ID":"f9f5d528-c852-4804-8ac3-9c3fd00cc1d3","Type":"ContainerDied","Data":"a9976bb8d866cc051393111ccfd6cc8908fe75f0276bbd65af067754b1f949c1"} Nov 22 00:11:17 crc kubenswrapper[4928]: I1122 00:11:17.634061 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5qv65"] Nov 22 00:11:17 crc kubenswrapper[4928]: I1122 00:11:17.635194 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5qv65"] Nov 22 00:11:17 crc kubenswrapper[4928]: I1122 00:11:17.635275 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5qv65" Nov 22 00:11:17 crc kubenswrapper[4928]: I1122 00:11:17.637164 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 22 00:11:17 crc kubenswrapper[4928]: I1122 00:11:17.671257 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99661af0-b4f1-4678-b859-2e5f0a3af123-catalog-content\") pod \"community-operators-5qv65\" (UID: \"99661af0-b4f1-4678-b859-2e5f0a3af123\") " pod="openshift-marketplace/community-operators-5qv65" Nov 22 00:11:17 crc kubenswrapper[4928]: I1122 00:11:17.671302 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99661af0-b4f1-4678-b859-2e5f0a3af123-utilities\") pod \"community-operators-5qv65\" (UID: \"99661af0-b4f1-4678-b859-2e5f0a3af123\") " pod="openshift-marketplace/community-operators-5qv65" Nov 22 00:11:17 crc kubenswrapper[4928]: I1122 00:11:17.671345 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbfq8\" (UniqueName: \"kubernetes.io/projected/99661af0-b4f1-4678-b859-2e5f0a3af123-kube-api-access-cbfq8\") pod \"community-operators-5qv65\" (UID: \"99661af0-b4f1-4678-b859-2e5f0a3af123\") " pod="openshift-marketplace/community-operators-5qv65" Nov 22 00:11:17 crc kubenswrapper[4928]: I1122 00:11:17.772679 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbfq8\" (UniqueName: \"kubernetes.io/projected/99661af0-b4f1-4678-b859-2e5f0a3af123-kube-api-access-cbfq8\") pod \"community-operators-5qv65\" (UID: \"99661af0-b4f1-4678-b859-2e5f0a3af123\") " pod="openshift-marketplace/community-operators-5qv65" Nov 22 00:11:17 crc kubenswrapper[4928]: I1122 00:11:17.772777 4928 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99661af0-b4f1-4678-b859-2e5f0a3af123-catalog-content\") pod \"community-operators-5qv65\" (UID: \"99661af0-b4f1-4678-b859-2e5f0a3af123\") " pod="openshift-marketplace/community-operators-5qv65" Nov 22 00:11:17 crc kubenswrapper[4928]: I1122 00:11:17.772828 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99661af0-b4f1-4678-b859-2e5f0a3af123-utilities\") pod \"community-operators-5qv65\" (UID: \"99661af0-b4f1-4678-b859-2e5f0a3af123\") " pod="openshift-marketplace/community-operators-5qv65" Nov 22 00:11:17 crc kubenswrapper[4928]: I1122 00:11:17.773171 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99661af0-b4f1-4678-b859-2e5f0a3af123-catalog-content\") pod \"community-operators-5qv65\" (UID: \"99661af0-b4f1-4678-b859-2e5f0a3af123\") " pod="openshift-marketplace/community-operators-5qv65" Nov 22 00:11:17 crc kubenswrapper[4928]: I1122 00:11:17.774678 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99661af0-b4f1-4678-b859-2e5f0a3af123-utilities\") pod \"community-operators-5qv65\" (UID: \"99661af0-b4f1-4678-b859-2e5f0a3af123\") " pod="openshift-marketplace/community-operators-5qv65" Nov 22 00:11:17 crc kubenswrapper[4928]: I1122 00:11:17.804933 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbfq8\" (UniqueName: \"kubernetes.io/projected/99661af0-b4f1-4678-b859-2e5f0a3af123-kube-api-access-cbfq8\") pod \"community-operators-5qv65\" (UID: \"99661af0-b4f1-4678-b859-2e5f0a3af123\") " pod="openshift-marketplace/community-operators-5qv65" Nov 22 00:11:17 crc kubenswrapper[4928]: I1122 00:11:17.823741 4928 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-r97th"] Nov 22 00:11:17 crc kubenswrapper[4928]: I1122 00:11:17.825781 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r97th" Nov 22 00:11:17 crc kubenswrapper[4928]: I1122 00:11:17.828155 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 22 00:11:17 crc kubenswrapper[4928]: I1122 00:11:17.845928 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r97th"] Nov 22 00:11:17 crc kubenswrapper[4928]: I1122 00:11:17.874034 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9jzz\" (UniqueName: \"kubernetes.io/projected/e291904d-64aa-49d5-9cef-fddedca94cc9-kube-api-access-x9jzz\") pod \"certified-operators-r97th\" (UID: \"e291904d-64aa-49d5-9cef-fddedca94cc9\") " pod="openshift-marketplace/certified-operators-r97th" Nov 22 00:11:17 crc kubenswrapper[4928]: I1122 00:11:17.874141 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e291904d-64aa-49d5-9cef-fddedca94cc9-catalog-content\") pod \"certified-operators-r97th\" (UID: \"e291904d-64aa-49d5-9cef-fddedca94cc9\") " pod="openshift-marketplace/certified-operators-r97th" Nov 22 00:11:17 crc kubenswrapper[4928]: I1122 00:11:17.874172 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e291904d-64aa-49d5-9cef-fddedca94cc9-utilities\") pod \"certified-operators-r97th\" (UID: \"e291904d-64aa-49d5-9cef-fddedca94cc9\") " pod="openshift-marketplace/certified-operators-r97th" Nov 22 00:11:17 crc kubenswrapper[4928]: I1122 00:11:17.963609 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5qv65" Nov 22 00:11:17 crc kubenswrapper[4928]: I1122 00:11:17.975478 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e291904d-64aa-49d5-9cef-fddedca94cc9-catalog-content\") pod \"certified-operators-r97th\" (UID: \"e291904d-64aa-49d5-9cef-fddedca94cc9\") " pod="openshift-marketplace/certified-operators-r97th" Nov 22 00:11:17 crc kubenswrapper[4928]: I1122 00:11:17.975524 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e291904d-64aa-49d5-9cef-fddedca94cc9-utilities\") pod \"certified-operators-r97th\" (UID: \"e291904d-64aa-49d5-9cef-fddedca94cc9\") " pod="openshift-marketplace/certified-operators-r97th" Nov 22 00:11:17 crc kubenswrapper[4928]: I1122 00:11:17.975585 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9jzz\" (UniqueName: \"kubernetes.io/projected/e291904d-64aa-49d5-9cef-fddedca94cc9-kube-api-access-x9jzz\") pod \"certified-operators-r97th\" (UID: \"e291904d-64aa-49d5-9cef-fddedca94cc9\") " pod="openshift-marketplace/certified-operators-r97th" Nov 22 00:11:17 crc kubenswrapper[4928]: I1122 00:11:17.976158 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e291904d-64aa-49d5-9cef-fddedca94cc9-utilities\") pod \"certified-operators-r97th\" (UID: \"e291904d-64aa-49d5-9cef-fddedca94cc9\") " pod="openshift-marketplace/certified-operators-r97th" Nov 22 00:11:17 crc kubenswrapper[4928]: I1122 00:11:17.976158 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e291904d-64aa-49d5-9cef-fddedca94cc9-catalog-content\") pod \"certified-operators-r97th\" (UID: \"e291904d-64aa-49d5-9cef-fddedca94cc9\") " 
pod="openshift-marketplace/certified-operators-r97th" Nov 22 00:11:17 crc kubenswrapper[4928]: I1122 00:11:17.993168 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9jzz\" (UniqueName: \"kubernetes.io/projected/e291904d-64aa-49d5-9cef-fddedca94cc9-kube-api-access-x9jzz\") pod \"certified-operators-r97th\" (UID: \"e291904d-64aa-49d5-9cef-fddedca94cc9\") " pod="openshift-marketplace/certified-operators-r97th" Nov 22 00:11:18 crc kubenswrapper[4928]: I1122 00:11:18.055998 4928 generic.go:334] "Generic (PLEG): container finished" podID="015c012b-0209-45c2-bf4b-09f2e274b810" containerID="5ed0f90f6956248c0f661bfaa867147f8ffb5c1dd883a340170d76fcf5cd8038" exitCode=0 Nov 22 00:11:18 crc kubenswrapper[4928]: I1122 00:11:18.056087 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vftb7" event={"ID":"015c012b-0209-45c2-bf4b-09f2e274b810","Type":"ContainerDied","Data":"5ed0f90f6956248c0f661bfaa867147f8ffb5c1dd883a340170d76fcf5cd8038"} Nov 22 00:11:18 crc kubenswrapper[4928]: I1122 00:11:18.154857 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r97th" Nov 22 00:11:18 crc kubenswrapper[4928]: I1122 00:11:18.161520 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5qv65"] Nov 22 00:11:18 crc kubenswrapper[4928]: W1122 00:11:18.169065 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99661af0_b4f1_4678_b859_2e5f0a3af123.slice/crio-9d0f7330c41703a95ba3c0afa77a88450aff1c2e870fe232711d239baf0cb77c WatchSource:0}: Error finding container 9d0f7330c41703a95ba3c0afa77a88450aff1c2e870fe232711d239baf0cb77c: Status 404 returned error can't find the container with id 9d0f7330c41703a95ba3c0afa77a88450aff1c2e870fe232711d239baf0cb77c Nov 22 00:11:18 crc kubenswrapper[4928]: I1122 00:11:18.336236 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r97th"] Nov 22 00:11:18 crc kubenswrapper[4928]: W1122 00:11:18.340930 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode291904d_64aa_49d5_9cef_fddedca94cc9.slice/crio-d97a64c1e8e151361e583cd9904d0f4532ab228ff9d8222980c328fe43246dca WatchSource:0}: Error finding container d97a64c1e8e151361e583cd9904d0f4532ab228ff9d8222980c328fe43246dca: Status 404 returned error can't find the container with id d97a64c1e8e151361e583cd9904d0f4532ab228ff9d8222980c328fe43246dca Nov 22 00:11:18 crc kubenswrapper[4928]: E1122 00:11:18.746294 4928 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9f5d528_c852_4804_8ac3_9c3fd00cc1d3.slice/crio-conmon-4e34dd322e7d3b340f67c574eb65d7075c42408493b3ab5f8d4f7fd5055a0370.scope\": RecentStats: unable to find data in memory cache]" Nov 22 00:11:19 crc kubenswrapper[4928]: I1122 00:11:19.062128 4928 
generic.go:334] "Generic (PLEG): container finished" podID="e291904d-64aa-49d5-9cef-fddedca94cc9" containerID="6703eb1599ac6d421fc861018959bf24d981138f8acda5c5a62a22a8790e47a6" exitCode=0 Nov 22 00:11:19 crc kubenswrapper[4928]: I1122 00:11:19.062173 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r97th" event={"ID":"e291904d-64aa-49d5-9cef-fddedca94cc9","Type":"ContainerDied","Data":"6703eb1599ac6d421fc861018959bf24d981138f8acda5c5a62a22a8790e47a6"} Nov 22 00:11:19 crc kubenswrapper[4928]: I1122 00:11:19.062209 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r97th" event={"ID":"e291904d-64aa-49d5-9cef-fddedca94cc9","Type":"ContainerStarted","Data":"d97a64c1e8e151361e583cd9904d0f4532ab228ff9d8222980c328fe43246dca"} Nov 22 00:11:19 crc kubenswrapper[4928]: I1122 00:11:19.064877 4928 generic.go:334] "Generic (PLEG): container finished" podID="f9f5d528-c852-4804-8ac3-9c3fd00cc1d3" containerID="4e34dd322e7d3b340f67c574eb65d7075c42408493b3ab5f8d4f7fd5055a0370" exitCode=0 Nov 22 00:11:19 crc kubenswrapper[4928]: I1122 00:11:19.064907 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4s2zz" event={"ID":"f9f5d528-c852-4804-8ac3-9c3fd00cc1d3","Type":"ContainerDied","Data":"4e34dd322e7d3b340f67c574eb65d7075c42408493b3ab5f8d4f7fd5055a0370"} Nov 22 00:11:19 crc kubenswrapper[4928]: I1122 00:11:19.071146 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vftb7" event={"ID":"015c012b-0209-45c2-bf4b-09f2e274b810","Type":"ContainerStarted","Data":"6f7711bcf70f8dbadcfbb0362788f5cac7838f01d6fdd66daeaeae85e519407c"} Nov 22 00:11:19 crc kubenswrapper[4928]: I1122 00:11:19.072848 4928 generic.go:334] "Generic (PLEG): container finished" podID="99661af0-b4f1-4678-b859-2e5f0a3af123" containerID="1e6ad9bf7fa1c6a2dedc9d548fdb817f3f1c8d22a7f199129ca3309b6b31410f" exitCode=0 Nov 22 
00:11:19 crc kubenswrapper[4928]: I1122 00:11:19.072928 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qv65" event={"ID":"99661af0-b4f1-4678-b859-2e5f0a3af123","Type":"ContainerDied","Data":"1e6ad9bf7fa1c6a2dedc9d548fdb817f3f1c8d22a7f199129ca3309b6b31410f"} Nov 22 00:11:19 crc kubenswrapper[4928]: I1122 00:11:19.073078 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qv65" event={"ID":"99661af0-b4f1-4678-b859-2e5f0a3af123","Type":"ContainerStarted","Data":"9d0f7330c41703a95ba3c0afa77a88450aff1c2e870fe232711d239baf0cb77c"} Nov 22 00:11:19 crc kubenswrapper[4928]: I1122 00:11:19.135466 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vftb7" podStartSLOduration=1.326444554 podStartE2EDuration="4.135447251s" podCreationTimestamp="2025-11-22 00:11:15 +0000 UTC" firstStartedPulling="2025-11-22 00:11:16.037310927 +0000 UTC m=+311.325342118" lastFinishedPulling="2025-11-22 00:11:18.846313624 +0000 UTC m=+314.134344815" observedRunningTime="2025-11-22 00:11:19.135436201 +0000 UTC m=+314.423467392" watchObservedRunningTime="2025-11-22 00:11:19.135447251 +0000 UTC m=+314.423478452" Nov 22 00:11:20 crc kubenswrapper[4928]: I1122 00:11:20.079870 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4s2zz" event={"ID":"f9f5d528-c852-4804-8ac3-9c3fd00cc1d3","Type":"ContainerStarted","Data":"4751b1ae699fecc769db6394afb8076a2e2daed0542b0f98d14db763a2d30181"} Nov 22 00:11:20 crc kubenswrapper[4928]: I1122 00:11:20.099965 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4s2zz" podStartSLOduration=2.3885298 podStartE2EDuration="5.09994864s" podCreationTimestamp="2025-11-22 00:11:15 +0000 UTC" firstStartedPulling="2025-11-22 00:11:17.041655321 +0000 UTC m=+312.329686502" 
lastFinishedPulling="2025-11-22 00:11:19.753074151 +0000 UTC m=+315.041105342" observedRunningTime="2025-11-22 00:11:20.09922805 +0000 UTC m=+315.387259242" watchObservedRunningTime="2025-11-22 00:11:20.09994864 +0000 UTC m=+315.387979821" Nov 22 00:11:21 crc kubenswrapper[4928]: I1122 00:11:21.086947 4928 generic.go:334] "Generic (PLEG): container finished" podID="e291904d-64aa-49d5-9cef-fddedca94cc9" containerID="96eb71892551c5c218d9d6328ff34ce804fc7dda043293f246a185093fc3e569" exitCode=0 Nov 22 00:11:21 crc kubenswrapper[4928]: I1122 00:11:21.087046 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r97th" event={"ID":"e291904d-64aa-49d5-9cef-fddedca94cc9","Type":"ContainerDied","Data":"96eb71892551c5c218d9d6328ff34ce804fc7dda043293f246a185093fc3e569"} Nov 22 00:11:21 crc kubenswrapper[4928]: I1122 00:11:21.089591 4928 generic.go:334] "Generic (PLEG): container finished" podID="99661af0-b4f1-4678-b859-2e5f0a3af123" containerID="e1351dacb6568f00befbcbdd078be3ef91e1f47f0d3577c4e32c23114f893757" exitCode=0 Nov 22 00:11:21 crc kubenswrapper[4928]: I1122 00:11:21.089712 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qv65" event={"ID":"99661af0-b4f1-4678-b859-2e5f0a3af123","Type":"ContainerDied","Data":"e1351dacb6568f00befbcbdd078be3ef91e1f47f0d3577c4e32c23114f893757"} Nov 22 00:11:23 crc kubenswrapper[4928]: I1122 00:11:23.102148 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qv65" event={"ID":"99661af0-b4f1-4678-b859-2e5f0a3af123","Type":"ContainerStarted","Data":"e7409b076a69780d1d439f204cd4d9f4ddcf74900d789325e64cad179ac51bc0"} Nov 22 00:11:23 crc kubenswrapper[4928]: I1122 00:11:23.105401 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r97th" 
event={"ID":"e291904d-64aa-49d5-9cef-fddedca94cc9","Type":"ContainerStarted","Data":"e68386bfc8d24dfd35d4efa6ac6852d3360b0386062da27b71c62af2e7c02e2c"} Nov 22 00:11:23 crc kubenswrapper[4928]: I1122 00:11:23.120478 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5qv65" podStartSLOduration=3.566063119 podStartE2EDuration="6.120461736s" podCreationTimestamp="2025-11-22 00:11:17 +0000 UTC" firstStartedPulling="2025-11-22 00:11:19.073960593 +0000 UTC m=+314.361991804" lastFinishedPulling="2025-11-22 00:11:21.62835923 +0000 UTC m=+316.916390421" observedRunningTime="2025-11-22 00:11:23.119429439 +0000 UTC m=+318.407460640" watchObservedRunningTime="2025-11-22 00:11:23.120461736 +0000 UTC m=+318.408492927" Nov 22 00:11:23 crc kubenswrapper[4928]: I1122 00:11:23.142027 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r97th" podStartSLOduration=3.249401306 podStartE2EDuration="6.142011316s" podCreationTimestamp="2025-11-22 00:11:17 +0000 UTC" firstStartedPulling="2025-11-22 00:11:19.063921712 +0000 UTC m=+314.351952903" lastFinishedPulling="2025-11-22 00:11:21.956531722 +0000 UTC m=+317.244562913" observedRunningTime="2025-11-22 00:11:23.139127872 +0000 UTC m=+318.427159063" watchObservedRunningTime="2025-11-22 00:11:23.142011316 +0000 UTC m=+318.430042517" Nov 22 00:11:25 crc kubenswrapper[4928]: I1122 00:11:25.547650 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vftb7" Nov 22 00:11:25 crc kubenswrapper[4928]: I1122 00:11:25.547979 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vftb7" Nov 22 00:11:25 crc kubenswrapper[4928]: I1122 00:11:25.586681 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vftb7" Nov 22 00:11:25 crc 
kubenswrapper[4928]: I1122 00:11:25.738982 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4s2zz" Nov 22 00:11:25 crc kubenswrapper[4928]: I1122 00:11:25.739259 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4s2zz" Nov 22 00:11:25 crc kubenswrapper[4928]: I1122 00:11:25.772924 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4s2zz" Nov 22 00:11:26 crc kubenswrapper[4928]: I1122 00:11:26.163681 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4s2zz" Nov 22 00:11:26 crc kubenswrapper[4928]: I1122 00:11:26.165453 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vftb7" Nov 22 00:11:27 crc kubenswrapper[4928]: I1122 00:11:27.964753 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5qv65" Nov 22 00:11:27 crc kubenswrapper[4928]: I1122 00:11:27.965224 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5qv65" Nov 22 00:11:28 crc kubenswrapper[4928]: I1122 00:11:28.010971 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5qv65" Nov 22 00:11:28 crc kubenswrapper[4928]: I1122 00:11:28.156420 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r97th" Nov 22 00:11:28 crc kubenswrapper[4928]: I1122 00:11:28.156487 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r97th" Nov 22 00:11:28 crc kubenswrapper[4928]: I1122 00:11:28.175745 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/community-operators-5qv65" Nov 22 00:11:28 crc kubenswrapper[4928]: I1122 00:11:28.191515 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r97th" Nov 22 00:11:29 crc kubenswrapper[4928]: I1122 00:11:29.176533 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r97th" Nov 22 00:12:06 crc kubenswrapper[4928]: I1122 00:12:06.707176 4928 patch_prober.go:28] interesting pod/machine-config-daemon-hp7cp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 00:12:06 crc kubenswrapper[4928]: I1122 00:12:06.708034 4928 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 00:12:36 crc kubenswrapper[4928]: I1122 00:12:36.707582 4928 patch_prober.go:28] interesting pod/machine-config-daemon-hp7cp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 00:12:36 crc kubenswrapper[4928]: I1122 00:12:36.708208 4928 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 00:12:59 crc kubenswrapper[4928]: I1122 
00:12:59.117446 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hqkx7"] Nov 22 00:12:59 crc kubenswrapper[4928]: I1122 00:12:59.118577 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-hqkx7" Nov 22 00:12:59 crc kubenswrapper[4928]: I1122 00:12:59.148326 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hqkx7"] Nov 22 00:12:59 crc kubenswrapper[4928]: I1122 00:12:59.299778 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ebaef46-e8b5-41ca-80bd-d673a4622195-trusted-ca\") pod \"image-registry-66df7c8f76-hqkx7\" (UID: \"9ebaef46-e8b5-41ca-80bd-d673a4622195\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqkx7" Nov 22 00:12:59 crc kubenswrapper[4928]: I1122 00:12:59.300047 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9ebaef46-e8b5-41ca-80bd-d673a4622195-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hqkx7\" (UID: \"9ebaef46-e8b5-41ca-80bd-d673a4622195\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqkx7" Nov 22 00:12:59 crc kubenswrapper[4928]: I1122 00:12:59.300104 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-hqkx7\" (UID: \"9ebaef46-e8b5-41ca-80bd-d673a4622195\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqkx7" Nov 22 00:12:59 crc kubenswrapper[4928]: I1122 00:12:59.300139 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9ebaef46-e8b5-41ca-80bd-d673a4622195-bound-sa-token\") pod \"image-registry-66df7c8f76-hqkx7\" (UID: \"9ebaef46-e8b5-41ca-80bd-d673a4622195\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqkx7" Nov 22 00:12:59 crc kubenswrapper[4928]: I1122 00:12:59.300275 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9ebaef46-e8b5-41ca-80bd-d673a4622195-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hqkx7\" (UID: \"9ebaef46-e8b5-41ca-80bd-d673a4622195\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqkx7" Nov 22 00:12:59 crc kubenswrapper[4928]: I1122 00:12:59.300583 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9ebaef46-e8b5-41ca-80bd-d673a4622195-registry-tls\") pod \"image-registry-66df7c8f76-hqkx7\" (UID: \"9ebaef46-e8b5-41ca-80bd-d673a4622195\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqkx7" Nov 22 00:12:59 crc kubenswrapper[4928]: I1122 00:12:59.300660 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz6xs\" (UniqueName: \"kubernetes.io/projected/9ebaef46-e8b5-41ca-80bd-d673a4622195-kube-api-access-kz6xs\") pod \"image-registry-66df7c8f76-hqkx7\" (UID: \"9ebaef46-e8b5-41ca-80bd-d673a4622195\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqkx7" Nov 22 00:12:59 crc kubenswrapper[4928]: I1122 00:12:59.300682 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9ebaef46-e8b5-41ca-80bd-d673a4622195-registry-certificates\") pod \"image-registry-66df7c8f76-hqkx7\" (UID: \"9ebaef46-e8b5-41ca-80bd-d673a4622195\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-hqkx7" Nov 22 00:12:59 crc kubenswrapper[4928]: I1122 00:12:59.321130 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-hqkx7\" (UID: \"9ebaef46-e8b5-41ca-80bd-d673a4622195\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqkx7" Nov 22 00:12:59 crc kubenswrapper[4928]: I1122 00:12:59.401374 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9ebaef46-e8b5-41ca-80bd-d673a4622195-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hqkx7\" (UID: \"9ebaef46-e8b5-41ca-80bd-d673a4622195\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqkx7" Nov 22 00:12:59 crc kubenswrapper[4928]: I1122 00:12:59.401425 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9ebaef46-e8b5-41ca-80bd-d673a4622195-registry-tls\") pod \"image-registry-66df7c8f76-hqkx7\" (UID: \"9ebaef46-e8b5-41ca-80bd-d673a4622195\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqkx7" Nov 22 00:12:59 crc kubenswrapper[4928]: I1122 00:12:59.401464 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz6xs\" (UniqueName: \"kubernetes.io/projected/9ebaef46-e8b5-41ca-80bd-d673a4622195-kube-api-access-kz6xs\") pod \"image-registry-66df7c8f76-hqkx7\" (UID: \"9ebaef46-e8b5-41ca-80bd-d673a4622195\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqkx7" Nov 22 00:12:59 crc kubenswrapper[4928]: I1122 00:12:59.401484 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/9ebaef46-e8b5-41ca-80bd-d673a4622195-registry-certificates\") pod \"image-registry-66df7c8f76-hqkx7\" (UID: \"9ebaef46-e8b5-41ca-80bd-d673a4622195\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqkx7" Nov 22 00:12:59 crc kubenswrapper[4928]: I1122 00:12:59.401504 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ebaef46-e8b5-41ca-80bd-d673a4622195-trusted-ca\") pod \"image-registry-66df7c8f76-hqkx7\" (UID: \"9ebaef46-e8b5-41ca-80bd-d673a4622195\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqkx7" Nov 22 00:12:59 crc kubenswrapper[4928]: I1122 00:12:59.401531 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9ebaef46-e8b5-41ca-80bd-d673a4622195-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hqkx7\" (UID: \"9ebaef46-e8b5-41ca-80bd-d673a4622195\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqkx7" Nov 22 00:12:59 crc kubenswrapper[4928]: I1122 00:12:59.401555 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9ebaef46-e8b5-41ca-80bd-d673a4622195-bound-sa-token\") pod \"image-registry-66df7c8f76-hqkx7\" (UID: \"9ebaef46-e8b5-41ca-80bd-d673a4622195\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqkx7" Nov 22 00:12:59 crc kubenswrapper[4928]: I1122 00:12:59.402328 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9ebaef46-e8b5-41ca-80bd-d673a4622195-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hqkx7\" (UID: \"9ebaef46-e8b5-41ca-80bd-d673a4622195\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqkx7" Nov 22 00:12:59 crc kubenswrapper[4928]: I1122 00:12:59.402691 4928 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9ebaef46-e8b5-41ca-80bd-d673a4622195-registry-certificates\") pod \"image-registry-66df7c8f76-hqkx7\" (UID: \"9ebaef46-e8b5-41ca-80bd-d673a4622195\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqkx7" Nov 22 00:12:59 crc kubenswrapper[4928]: I1122 00:12:59.402950 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ebaef46-e8b5-41ca-80bd-d673a4622195-trusted-ca\") pod \"image-registry-66df7c8f76-hqkx7\" (UID: \"9ebaef46-e8b5-41ca-80bd-d673a4622195\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqkx7" Nov 22 00:12:59 crc kubenswrapper[4928]: I1122 00:12:59.414233 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9ebaef46-e8b5-41ca-80bd-d673a4622195-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hqkx7\" (UID: \"9ebaef46-e8b5-41ca-80bd-d673a4622195\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqkx7" Nov 22 00:12:59 crc kubenswrapper[4928]: I1122 00:12:59.414902 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9ebaef46-e8b5-41ca-80bd-d673a4622195-registry-tls\") pod \"image-registry-66df7c8f76-hqkx7\" (UID: \"9ebaef46-e8b5-41ca-80bd-d673a4622195\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqkx7" Nov 22 00:12:59 crc kubenswrapper[4928]: I1122 00:12:59.416705 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9ebaef46-e8b5-41ca-80bd-d673a4622195-bound-sa-token\") pod \"image-registry-66df7c8f76-hqkx7\" (UID: \"9ebaef46-e8b5-41ca-80bd-d673a4622195\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqkx7" Nov 22 00:12:59 crc kubenswrapper[4928]: I1122 00:12:59.416980 4928 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz6xs\" (UniqueName: \"kubernetes.io/projected/9ebaef46-e8b5-41ca-80bd-d673a4622195-kube-api-access-kz6xs\") pod \"image-registry-66df7c8f76-hqkx7\" (UID: \"9ebaef46-e8b5-41ca-80bd-d673a4622195\") " pod="openshift-image-registry/image-registry-66df7c8f76-hqkx7" Nov 22 00:12:59 crc kubenswrapper[4928]: I1122 00:12:59.438131 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-hqkx7" Nov 22 00:12:59 crc kubenswrapper[4928]: I1122 00:12:59.624279 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hqkx7"] Nov 22 00:12:59 crc kubenswrapper[4928]: I1122 00:12:59.701259 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-hqkx7" event={"ID":"9ebaef46-e8b5-41ca-80bd-d673a4622195","Type":"ContainerStarted","Data":"a56139f8c29f04bd24fc3786ade4a9c351e9f522017b6b25e2d16c33a74f5994"} Nov 22 00:13:00 crc kubenswrapper[4928]: I1122 00:13:00.707472 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-hqkx7" event={"ID":"9ebaef46-e8b5-41ca-80bd-d673a4622195","Type":"ContainerStarted","Data":"d95e59e1dda660619898c5a5003f81cb4cc0b0999b074f6052c53194b76122ae"} Nov 22 00:13:00 crc kubenswrapper[4928]: I1122 00:13:00.707826 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-hqkx7" Nov 22 00:13:00 crc kubenswrapper[4928]: I1122 00:13:00.726893 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-hqkx7" podStartSLOduration=1.7268759889999998 podStartE2EDuration="1.726875989s" podCreationTimestamp="2025-11-22 00:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:13:00.722877567 +0000 UTC m=+416.010908758" watchObservedRunningTime="2025-11-22 00:13:00.726875989 +0000 UTC m=+416.014907180" Nov 22 00:13:06 crc kubenswrapper[4928]: I1122 00:13:06.706922 4928 patch_prober.go:28] interesting pod/machine-config-daemon-hp7cp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 00:13:06 crc kubenswrapper[4928]: I1122 00:13:06.707768 4928 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 00:13:06 crc kubenswrapper[4928]: I1122 00:13:06.707869 4928 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" Nov 22 00:13:06 crc kubenswrapper[4928]: I1122 00:13:06.708686 4928 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"89b6744e44d745238ba09799ad0c4c13a9b38fe687a1383d25bf8855e3b2d8b7"} pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 00:13:06 crc kubenswrapper[4928]: I1122 00:13:06.708843 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" containerID="cri-o://89b6744e44d745238ba09799ad0c4c13a9b38fe687a1383d25bf8855e3b2d8b7" gracePeriod=600 Nov 
22 00:13:07 crc kubenswrapper[4928]: I1122 00:13:07.752194 4928 generic.go:334] "Generic (PLEG): container finished" podID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerID="89b6744e44d745238ba09799ad0c4c13a9b38fe687a1383d25bf8855e3b2d8b7" exitCode=0 Nov 22 00:13:07 crc kubenswrapper[4928]: I1122 00:13:07.752322 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" event={"ID":"897ce64d-0600-4c6a-b684-0c1683fa14bc","Type":"ContainerDied","Data":"89b6744e44d745238ba09799ad0c4c13a9b38fe687a1383d25bf8855e3b2d8b7"} Nov 22 00:13:07 crc kubenswrapper[4928]: I1122 00:13:07.753193 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" event={"ID":"897ce64d-0600-4c6a-b684-0c1683fa14bc","Type":"ContainerStarted","Data":"90a2ab314eb110db4bd0a1773ada11cfb3889716012f40d2cc439313ab3736d0"} Nov 22 00:13:07 crc kubenswrapper[4928]: I1122 00:13:07.753225 4928 scope.go:117] "RemoveContainer" containerID="cf4923905a54d888d652ff50f477aa1487106dd4860c1c35b374fce196d89a70" Nov 22 00:13:19 crc kubenswrapper[4928]: I1122 00:13:19.446777 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-hqkx7" Nov 22 00:13:19 crc kubenswrapper[4928]: I1122 00:13:19.517539 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dc949"] Nov 22 00:13:44 crc kubenswrapper[4928]: I1122 00:13:44.559205 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-dc949" podUID="670d4817-7813-4326-9eb3-d3319ecc7ae2" containerName="registry" containerID="cri-o://1b3a3c1e9d6934bb1afe29d258ee6ab70729addcf516321900f49e033201f37a" gracePeriod=30 Nov 22 00:13:44 crc kubenswrapper[4928]: I1122 00:13:44.889548 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:13:44 crc kubenswrapper[4928]: I1122 00:13:44.919122 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/670d4817-7813-4326-9eb3-d3319ecc7ae2-installation-pull-secrets\") pod \"670d4817-7813-4326-9eb3-d3319ecc7ae2\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " Nov 22 00:13:44 crc kubenswrapper[4928]: I1122 00:13:44.919964 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p266w\" (UniqueName: \"kubernetes.io/projected/670d4817-7813-4326-9eb3-d3319ecc7ae2-kube-api-access-p266w\") pod \"670d4817-7813-4326-9eb3-d3319ecc7ae2\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " Nov 22 00:13:44 crc kubenswrapper[4928]: I1122 00:13:44.920072 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/670d4817-7813-4326-9eb3-d3319ecc7ae2-bound-sa-token\") pod \"670d4817-7813-4326-9eb3-d3319ecc7ae2\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " Nov 22 00:13:44 crc kubenswrapper[4928]: I1122 00:13:44.920201 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/670d4817-7813-4326-9eb3-d3319ecc7ae2-registry-certificates\") pod \"670d4817-7813-4326-9eb3-d3319ecc7ae2\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " Nov 22 00:13:44 crc kubenswrapper[4928]: I1122 00:13:44.920237 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/670d4817-7813-4326-9eb3-d3319ecc7ae2-ca-trust-extracted\") pod \"670d4817-7813-4326-9eb3-d3319ecc7ae2\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " Nov 22 00:13:44 crc kubenswrapper[4928]: I1122 00:13:44.920262 4928 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/670d4817-7813-4326-9eb3-d3319ecc7ae2-registry-tls\") pod \"670d4817-7813-4326-9eb3-d3319ecc7ae2\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " Nov 22 00:13:44 crc kubenswrapper[4928]: I1122 00:13:44.920290 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/670d4817-7813-4326-9eb3-d3319ecc7ae2-trusted-ca\") pod \"670d4817-7813-4326-9eb3-d3319ecc7ae2\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " Nov 22 00:13:44 crc kubenswrapper[4928]: I1122 00:13:44.920448 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"670d4817-7813-4326-9eb3-d3319ecc7ae2\" (UID: \"670d4817-7813-4326-9eb3-d3319ecc7ae2\") " Nov 22 00:13:44 crc kubenswrapper[4928]: I1122 00:13:44.921760 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/670d4817-7813-4326-9eb3-d3319ecc7ae2-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "670d4817-7813-4326-9eb3-d3319ecc7ae2" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:13:44 crc kubenswrapper[4928]: I1122 00:13:44.922108 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/670d4817-7813-4326-9eb3-d3319ecc7ae2-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "670d4817-7813-4326-9eb3-d3319ecc7ae2" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:13:44 crc kubenswrapper[4928]: I1122 00:13:44.931237 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/670d4817-7813-4326-9eb3-d3319ecc7ae2-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "670d4817-7813-4326-9eb3-d3319ecc7ae2" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:13:44 crc kubenswrapper[4928]: I1122 00:13:44.932142 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/670d4817-7813-4326-9eb3-d3319ecc7ae2-kube-api-access-p266w" (OuterVolumeSpecName: "kube-api-access-p266w") pod "670d4817-7813-4326-9eb3-d3319ecc7ae2" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2"). InnerVolumeSpecName "kube-api-access-p266w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:13:44 crc kubenswrapper[4928]: I1122 00:13:44.932381 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/670d4817-7813-4326-9eb3-d3319ecc7ae2-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "670d4817-7813-4326-9eb3-d3319ecc7ae2" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:13:44 crc kubenswrapper[4928]: I1122 00:13:44.935476 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/670d4817-7813-4326-9eb3-d3319ecc7ae2-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "670d4817-7813-4326-9eb3-d3319ecc7ae2" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:13:44 crc kubenswrapper[4928]: I1122 00:13:44.940441 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/670d4817-7813-4326-9eb3-d3319ecc7ae2-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "670d4817-7813-4326-9eb3-d3319ecc7ae2" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:13:44 crc kubenswrapper[4928]: I1122 00:13:44.942627 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "670d4817-7813-4326-9eb3-d3319ecc7ae2" (UID: "670d4817-7813-4326-9eb3-d3319ecc7ae2"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 22 00:13:44 crc kubenswrapper[4928]: I1122 00:13:44.987506 4928 generic.go:334] "Generic (PLEG): container finished" podID="670d4817-7813-4326-9eb3-d3319ecc7ae2" containerID="1b3a3c1e9d6934bb1afe29d258ee6ab70729addcf516321900f49e033201f37a" exitCode=0 Nov 22 00:13:44 crc kubenswrapper[4928]: I1122 00:13:44.987587 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dc949" event={"ID":"670d4817-7813-4326-9eb3-d3319ecc7ae2","Type":"ContainerDied","Data":"1b3a3c1e9d6934bb1afe29d258ee6ab70729addcf516321900f49e033201f37a"} Nov 22 00:13:44 crc kubenswrapper[4928]: I1122 00:13:44.987636 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dc949" event={"ID":"670d4817-7813-4326-9eb3-d3319ecc7ae2","Type":"ContainerDied","Data":"6ee354e303059c877bba50a194a6cadacfaef67cd6774f4a4fbeff7453d18785"} Nov 22 00:13:44 crc kubenswrapper[4928]: I1122 00:13:44.987667 4928 scope.go:117] 
"RemoveContainer" containerID="1b3a3c1e9d6934bb1afe29d258ee6ab70729addcf516321900f49e033201f37a" Nov 22 00:13:44 crc kubenswrapper[4928]: I1122 00:13:44.987900 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dc949" Nov 22 00:13:45 crc kubenswrapper[4928]: I1122 00:13:45.013115 4928 scope.go:117] "RemoveContainer" containerID="1b3a3c1e9d6934bb1afe29d258ee6ab70729addcf516321900f49e033201f37a" Nov 22 00:13:45 crc kubenswrapper[4928]: E1122 00:13:45.013722 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b3a3c1e9d6934bb1afe29d258ee6ab70729addcf516321900f49e033201f37a\": container with ID starting with 1b3a3c1e9d6934bb1afe29d258ee6ab70729addcf516321900f49e033201f37a not found: ID does not exist" containerID="1b3a3c1e9d6934bb1afe29d258ee6ab70729addcf516321900f49e033201f37a" Nov 22 00:13:45 crc kubenswrapper[4928]: I1122 00:13:45.013783 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b3a3c1e9d6934bb1afe29d258ee6ab70729addcf516321900f49e033201f37a"} err="failed to get container status \"1b3a3c1e9d6934bb1afe29d258ee6ab70729addcf516321900f49e033201f37a\": rpc error: code = NotFound desc = could not find container \"1b3a3c1e9d6934bb1afe29d258ee6ab70729addcf516321900f49e033201f37a\": container with ID starting with 1b3a3c1e9d6934bb1afe29d258ee6ab70729addcf516321900f49e033201f37a not found: ID does not exist" Nov 22 00:13:45 crc kubenswrapper[4928]: I1122 00:13:45.024141 4928 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/670d4817-7813-4326-9eb3-d3319ecc7ae2-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 22 00:13:45 crc kubenswrapper[4928]: I1122 00:13:45.024181 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p266w\" (UniqueName: 
\"kubernetes.io/projected/670d4817-7813-4326-9eb3-d3319ecc7ae2-kube-api-access-p266w\") on node \"crc\" DevicePath \"\"" Nov 22 00:13:45 crc kubenswrapper[4928]: I1122 00:13:45.024287 4928 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/670d4817-7813-4326-9eb3-d3319ecc7ae2-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 22 00:13:45 crc kubenswrapper[4928]: I1122 00:13:45.024302 4928 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/670d4817-7813-4326-9eb3-d3319ecc7ae2-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 22 00:13:45 crc kubenswrapper[4928]: I1122 00:13:45.024311 4928 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/670d4817-7813-4326-9eb3-d3319ecc7ae2-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 22 00:13:45 crc kubenswrapper[4928]: I1122 00:13:45.024320 4928 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/670d4817-7813-4326-9eb3-d3319ecc7ae2-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 22 00:13:45 crc kubenswrapper[4928]: I1122 00:13:45.024330 4928 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/670d4817-7813-4326-9eb3-d3319ecc7ae2-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 00:13:45 crc kubenswrapper[4928]: I1122 00:13:45.031761 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dc949"] Nov 22 00:13:45 crc kubenswrapper[4928]: I1122 00:13:45.034515 4928 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dc949"] Nov 22 00:13:45 crc kubenswrapper[4928]: I1122 00:13:45.626026 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="670d4817-7813-4326-9eb3-d3319ecc7ae2" path="/var/lib/kubelet/pods/670d4817-7813-4326-9eb3-d3319ecc7ae2/volumes" Nov 22 00:15:00 crc kubenswrapper[4928]: I1122 00:15:00.145756 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396175-5kwkc"] Nov 22 00:15:00 crc kubenswrapper[4928]: E1122 00:15:00.146572 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="670d4817-7813-4326-9eb3-d3319ecc7ae2" containerName="registry" Nov 22 00:15:00 crc kubenswrapper[4928]: I1122 00:15:00.146590 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="670d4817-7813-4326-9eb3-d3319ecc7ae2" containerName="registry" Nov 22 00:15:00 crc kubenswrapper[4928]: I1122 00:15:00.146712 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="670d4817-7813-4326-9eb3-d3319ecc7ae2" containerName="registry" Nov 22 00:15:00 crc kubenswrapper[4928]: I1122 00:15:00.147211 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396175-5kwkc" Nov 22 00:15:00 crc kubenswrapper[4928]: I1122 00:15:00.149601 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 00:15:00 crc kubenswrapper[4928]: I1122 00:15:00.150212 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 00:15:00 crc kubenswrapper[4928]: I1122 00:15:00.153456 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396175-5kwkc"] Nov 22 00:15:00 crc kubenswrapper[4928]: I1122 00:15:00.284443 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr7p6\" (UniqueName: \"kubernetes.io/projected/a6bdebad-0881-41ba-a0c6-c71a868fcf9b-kube-api-access-tr7p6\") pod \"collect-profiles-29396175-5kwkc\" (UID: \"a6bdebad-0881-41ba-a0c6-c71a868fcf9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396175-5kwkc" Nov 22 00:15:00 crc kubenswrapper[4928]: I1122 00:15:00.284517 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6bdebad-0881-41ba-a0c6-c71a868fcf9b-config-volume\") pod \"collect-profiles-29396175-5kwkc\" (UID: \"a6bdebad-0881-41ba-a0c6-c71a868fcf9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396175-5kwkc" Nov 22 00:15:00 crc kubenswrapper[4928]: I1122 00:15:00.284632 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6bdebad-0881-41ba-a0c6-c71a868fcf9b-secret-volume\") pod \"collect-profiles-29396175-5kwkc\" (UID: \"a6bdebad-0881-41ba-a0c6-c71a868fcf9b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29396175-5kwkc" Nov 22 00:15:00 crc kubenswrapper[4928]: I1122 00:15:00.386442 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr7p6\" (UniqueName: \"kubernetes.io/projected/a6bdebad-0881-41ba-a0c6-c71a868fcf9b-kube-api-access-tr7p6\") pod \"collect-profiles-29396175-5kwkc\" (UID: \"a6bdebad-0881-41ba-a0c6-c71a868fcf9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396175-5kwkc" Nov 22 00:15:00 crc kubenswrapper[4928]: I1122 00:15:00.386682 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6bdebad-0881-41ba-a0c6-c71a868fcf9b-config-volume\") pod \"collect-profiles-29396175-5kwkc\" (UID: \"a6bdebad-0881-41ba-a0c6-c71a868fcf9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396175-5kwkc" Nov 22 00:15:00 crc kubenswrapper[4928]: I1122 00:15:00.386726 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6bdebad-0881-41ba-a0c6-c71a868fcf9b-secret-volume\") pod \"collect-profiles-29396175-5kwkc\" (UID: \"a6bdebad-0881-41ba-a0c6-c71a868fcf9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396175-5kwkc" Nov 22 00:15:00 crc kubenswrapper[4928]: I1122 00:15:00.388467 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6bdebad-0881-41ba-a0c6-c71a868fcf9b-config-volume\") pod \"collect-profiles-29396175-5kwkc\" (UID: \"a6bdebad-0881-41ba-a0c6-c71a868fcf9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396175-5kwkc" Nov 22 00:15:00 crc kubenswrapper[4928]: I1122 00:15:00.395513 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/a6bdebad-0881-41ba-a0c6-c71a868fcf9b-secret-volume\") pod \"collect-profiles-29396175-5kwkc\" (UID: \"a6bdebad-0881-41ba-a0c6-c71a868fcf9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396175-5kwkc" Nov 22 00:15:00 crc kubenswrapper[4928]: I1122 00:15:00.406701 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr7p6\" (UniqueName: \"kubernetes.io/projected/a6bdebad-0881-41ba-a0c6-c71a868fcf9b-kube-api-access-tr7p6\") pod \"collect-profiles-29396175-5kwkc\" (UID: \"a6bdebad-0881-41ba-a0c6-c71a868fcf9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396175-5kwkc" Nov 22 00:15:00 crc kubenswrapper[4928]: I1122 00:15:00.468022 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396175-5kwkc" Nov 22 00:15:00 crc kubenswrapper[4928]: I1122 00:15:00.621563 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396175-5kwkc"] Nov 22 00:15:01 crc kubenswrapper[4928]: I1122 00:15:01.427167 4928 generic.go:334] "Generic (PLEG): container finished" podID="a6bdebad-0881-41ba-a0c6-c71a868fcf9b" containerID="d2fdbc47779f69da52f1097012f0696d27df39cd37bfe24e9f4b0e02e7f3d1f2" exitCode=0 Nov 22 00:15:01 crc kubenswrapper[4928]: I1122 00:15:01.427345 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396175-5kwkc" event={"ID":"a6bdebad-0881-41ba-a0c6-c71a868fcf9b","Type":"ContainerDied","Data":"d2fdbc47779f69da52f1097012f0696d27df39cd37bfe24e9f4b0e02e7f3d1f2"} Nov 22 00:15:01 crc kubenswrapper[4928]: I1122 00:15:01.427527 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396175-5kwkc" 
event={"ID":"a6bdebad-0881-41ba-a0c6-c71a868fcf9b","Type":"ContainerStarted","Data":"50958f6bd64e3743667dd9f56ffd16ce6a309937d506ef6b91d31f89782ed87b"} Nov 22 00:15:02 crc kubenswrapper[4928]: I1122 00:15:02.667675 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396175-5kwkc" Nov 22 00:15:02 crc kubenswrapper[4928]: I1122 00:15:02.815719 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6bdebad-0881-41ba-a0c6-c71a868fcf9b-config-volume\") pod \"a6bdebad-0881-41ba-a0c6-c71a868fcf9b\" (UID: \"a6bdebad-0881-41ba-a0c6-c71a868fcf9b\") " Nov 22 00:15:02 crc kubenswrapper[4928]: I1122 00:15:02.815930 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6bdebad-0881-41ba-a0c6-c71a868fcf9b-secret-volume\") pod \"a6bdebad-0881-41ba-a0c6-c71a868fcf9b\" (UID: \"a6bdebad-0881-41ba-a0c6-c71a868fcf9b\") " Nov 22 00:15:02 crc kubenswrapper[4928]: I1122 00:15:02.816009 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr7p6\" (UniqueName: \"kubernetes.io/projected/a6bdebad-0881-41ba-a0c6-c71a868fcf9b-kube-api-access-tr7p6\") pod \"a6bdebad-0881-41ba-a0c6-c71a868fcf9b\" (UID: \"a6bdebad-0881-41ba-a0c6-c71a868fcf9b\") " Nov 22 00:15:02 crc kubenswrapper[4928]: I1122 00:15:02.816681 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6bdebad-0881-41ba-a0c6-c71a868fcf9b-config-volume" (OuterVolumeSpecName: "config-volume") pod "a6bdebad-0881-41ba-a0c6-c71a868fcf9b" (UID: "a6bdebad-0881-41ba-a0c6-c71a868fcf9b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:15:02 crc kubenswrapper[4928]: I1122 00:15:02.823064 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6bdebad-0881-41ba-a0c6-c71a868fcf9b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a6bdebad-0881-41ba-a0c6-c71a868fcf9b" (UID: "a6bdebad-0881-41ba-a0c6-c71a868fcf9b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:15:02 crc kubenswrapper[4928]: I1122 00:15:02.823297 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6bdebad-0881-41ba-a0c6-c71a868fcf9b-kube-api-access-tr7p6" (OuterVolumeSpecName: "kube-api-access-tr7p6") pod "a6bdebad-0881-41ba-a0c6-c71a868fcf9b" (UID: "a6bdebad-0881-41ba-a0c6-c71a868fcf9b"). InnerVolumeSpecName "kube-api-access-tr7p6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:15:02 crc kubenswrapper[4928]: I1122 00:15:02.917331 4928 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6bdebad-0881-41ba-a0c6-c71a868fcf9b-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 00:15:02 crc kubenswrapper[4928]: I1122 00:15:02.917395 4928 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6bdebad-0881-41ba-a0c6-c71a868fcf9b-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 00:15:02 crc kubenswrapper[4928]: I1122 00:15:02.917405 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tr7p6\" (UniqueName: \"kubernetes.io/projected/a6bdebad-0881-41ba-a0c6-c71a868fcf9b-kube-api-access-tr7p6\") on node \"crc\" DevicePath \"\"" Nov 22 00:15:03 crc kubenswrapper[4928]: I1122 00:15:03.442451 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396175-5kwkc" 
event={"ID":"a6bdebad-0881-41ba-a0c6-c71a868fcf9b","Type":"ContainerDied","Data":"50958f6bd64e3743667dd9f56ffd16ce6a309937d506ef6b91d31f89782ed87b"} Nov 22 00:15:03 crc kubenswrapper[4928]: I1122 00:15:03.442528 4928 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50958f6bd64e3743667dd9f56ffd16ce6a309937d506ef6b91d31f89782ed87b" Nov 22 00:15:03 crc kubenswrapper[4928]: I1122 00:15:03.442577 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396175-5kwkc" Nov 22 00:15:36 crc kubenswrapper[4928]: I1122 00:15:36.707098 4928 patch_prober.go:28] interesting pod/machine-config-daemon-hp7cp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 00:15:36 crc kubenswrapper[4928]: I1122 00:15:36.708061 4928 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 00:16:06 crc kubenswrapper[4928]: I1122 00:16:06.707472 4928 patch_prober.go:28] interesting pod/machine-config-daemon-hp7cp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 00:16:06 crc kubenswrapper[4928]: I1122 00:16:06.708126 4928 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 00:16:36 crc kubenswrapper[4928]: I1122 00:16:36.707702 4928 patch_prober.go:28] interesting pod/machine-config-daemon-hp7cp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 00:16:36 crc kubenswrapper[4928]: I1122 00:16:36.708375 4928 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 00:16:36 crc kubenswrapper[4928]: I1122 00:16:36.708425 4928 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" Nov 22 00:16:36 crc kubenswrapper[4928]: I1122 00:16:36.709024 4928 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"90a2ab314eb110db4bd0a1773ada11cfb3889716012f40d2cc439313ab3736d0"} pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 00:16:36 crc kubenswrapper[4928]: I1122 00:16:36.709095 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" containerID="cri-o://90a2ab314eb110db4bd0a1773ada11cfb3889716012f40d2cc439313ab3736d0" gracePeriod=600 Nov 22 00:16:37 crc kubenswrapper[4928]: I1122 00:16:37.382648 4928 generic.go:334] "Generic (PLEG): container finished" 
podID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerID="90a2ab314eb110db4bd0a1773ada11cfb3889716012f40d2cc439313ab3736d0" exitCode=0 Nov 22 00:16:37 crc kubenswrapper[4928]: I1122 00:16:37.382935 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" event={"ID":"897ce64d-0600-4c6a-b684-0c1683fa14bc","Type":"ContainerDied","Data":"90a2ab314eb110db4bd0a1773ada11cfb3889716012f40d2cc439313ab3736d0"} Nov 22 00:16:37 crc kubenswrapper[4928]: I1122 00:16:37.382965 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" event={"ID":"897ce64d-0600-4c6a-b684-0c1683fa14bc","Type":"ContainerStarted","Data":"32211f58683787fb757bd9a4fc769db246d8d45c2afe49257b9494ed6fed87fd"} Nov 22 00:16:37 crc kubenswrapper[4928]: I1122 00:16:37.382983 4928 scope.go:117] "RemoveContainer" containerID="89b6744e44d745238ba09799ad0c4c13a9b38fe687a1383d25bf8855e3b2d8b7" Nov 22 00:16:51 crc kubenswrapper[4928]: I1122 00:16:51.329931 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nchmx"] Nov 22 00:16:51 crc kubenswrapper[4928]: I1122 00:16:51.331132 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="ovn-controller" containerID="cri-o://c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9" gracePeriod=30 Nov 22 00:16:51 crc kubenswrapper[4928]: I1122 00:16:51.331728 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="sbdb" containerID="cri-o://4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001" gracePeriod=30 Nov 22 00:16:51 crc kubenswrapper[4928]: I1122 00:16:51.331862 4928 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="nbdb" containerID="cri-o://065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f" gracePeriod=30 Nov 22 00:16:51 crc kubenswrapper[4928]: I1122 00:16:51.331949 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="northd" containerID="cri-o://40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185" gracePeriod=30 Nov 22 00:16:51 crc kubenswrapper[4928]: I1122 00:16:51.332071 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2" gracePeriod=30 Nov 22 00:16:51 crc kubenswrapper[4928]: I1122 00:16:51.332165 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="kube-rbac-proxy-node" containerID="cri-o://0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf" gracePeriod=30 Nov 22 00:16:51 crc kubenswrapper[4928]: I1122 00:16:51.332260 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="ovn-acl-logging" containerID="cri-o://9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e" gracePeriod=30 Nov 22 00:16:51 crc kubenswrapper[4928]: I1122 00:16:51.374839 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="ovnkube-controller" 
containerID="cri-o://8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081" gracePeriod=30 Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.099078 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nchmx_31460a9d-62d5-4dae-bb57-4db45544352d/ovnkube-controller/3.log" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.100943 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nchmx_31460a9d-62d5-4dae-bb57-4db45544352d/ovn-acl-logging/0.log" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.101501 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nchmx_31460a9d-62d5-4dae-bb57-4db45544352d/ovn-controller/0.log" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.101914 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.154256 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cfwz9"] Nov 22 00:16:52 crc kubenswrapper[4928]: E1122 00:16:52.155963 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="ovnkube-controller" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.155991 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="ovnkube-controller" Nov 22 00:16:52 crc kubenswrapper[4928]: E1122 00:16:52.156003 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="northd" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.156012 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="northd" Nov 22 00:16:52 crc kubenswrapper[4928]: E1122 
00:16:52.156019 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="nbdb" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.156027 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="nbdb" Nov 22 00:16:52 crc kubenswrapper[4928]: E1122 00:16:52.156035 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="ovnkube-controller" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.156042 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="ovnkube-controller" Nov 22 00:16:52 crc kubenswrapper[4928]: E1122 00:16:52.156053 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="ovn-controller" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.156060 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="ovn-controller" Nov 22 00:16:52 crc kubenswrapper[4928]: E1122 00:16:52.156073 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="ovnkube-controller" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.156080 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="ovnkube-controller" Nov 22 00:16:52 crc kubenswrapper[4928]: E1122 00:16:52.156088 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="kubecfg-setup" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.156095 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="kubecfg-setup" Nov 22 00:16:52 crc kubenswrapper[4928]: E1122 00:16:52.156107 4928 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="ovn-acl-logging" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.156132 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="ovn-acl-logging" Nov 22 00:16:52 crc kubenswrapper[4928]: E1122 00:16:52.156140 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="sbdb" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.156147 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="sbdb" Nov 22 00:16:52 crc kubenswrapper[4928]: E1122 00:16:52.156161 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="kube-rbac-proxy-node" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.156168 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="kube-rbac-proxy-node" Nov 22 00:16:52 crc kubenswrapper[4928]: E1122 00:16:52.156180 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6bdebad-0881-41ba-a0c6-c71a868fcf9b" containerName="collect-profiles" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.156187 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6bdebad-0881-41ba-a0c6-c71a868fcf9b" containerName="collect-profiles" Nov 22 00:16:52 crc kubenswrapper[4928]: E1122 00:16:52.156196 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="kube-rbac-proxy-ovn-metrics" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.156203 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="kube-rbac-proxy-ovn-metrics" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.156326 4928 
memory_manager.go:354] "RemoveStaleState removing state" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="ovn-controller" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.156341 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="ovnkube-controller" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.156350 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="sbdb" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.156359 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="ovnkube-controller" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.156367 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="nbdb" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.156378 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="northd" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.156387 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="ovnkube-controller" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.156966 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6bdebad-0881-41ba-a0c6-c71a868fcf9b" containerName="collect-profiles" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.156986 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="kube-rbac-proxy-node" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.157016 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="ovn-acl-logging" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 
00:16:52.157023 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="kube-rbac-proxy-ovn-metrics" Nov 22 00:16:52 crc kubenswrapper[4928]: E1122 00:16:52.157107 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="ovnkube-controller" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.157115 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="ovnkube-controller" Nov 22 00:16:52 crc kubenswrapper[4928]: E1122 00:16:52.157125 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="ovnkube-controller" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.157131 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="ovnkube-controller" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.157214 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="ovnkube-controller" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.157223 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" containerName="ovnkube-controller" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.159124 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.218692 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/31460a9d-62d5-4dae-bb57-4db45544352d-ovn-node-metrics-cert\") pod \"31460a9d-62d5-4dae-bb57-4db45544352d\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.218783 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-cni-netd\") pod \"31460a9d-62d5-4dae-bb57-4db45544352d\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.218882 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/31460a9d-62d5-4dae-bb57-4db45544352d-env-overrides\") pod \"31460a9d-62d5-4dae-bb57-4db45544352d\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.218937 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/31460a9d-62d5-4dae-bb57-4db45544352d-ovnkube-config\") pod \"31460a9d-62d5-4dae-bb57-4db45544352d\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.218960 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-var-lib-openvswitch\") pod \"31460a9d-62d5-4dae-bb57-4db45544352d\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.218981 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-cni-bin\") pod \"31460a9d-62d5-4dae-bb57-4db45544352d\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.219003 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-kubelet\") pod \"31460a9d-62d5-4dae-bb57-4db45544352d\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.219043 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-run-netns\") pod \"31460a9d-62d5-4dae-bb57-4db45544352d\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.219027 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "31460a9d-62d5-4dae-bb57-4db45544352d" (UID: "31460a9d-62d5-4dae-bb57-4db45544352d"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.219072 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "31460a9d-62d5-4dae-bb57-4db45544352d" (UID: "31460a9d-62d5-4dae-bb57-4db45544352d"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.219112 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "31460a9d-62d5-4dae-bb57-4db45544352d" (UID: "31460a9d-62d5-4dae-bb57-4db45544352d"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.219066 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/31460a9d-62d5-4dae-bb57-4db45544352d-ovnkube-script-lib\") pod \"31460a9d-62d5-4dae-bb57-4db45544352d\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.219144 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "31460a9d-62d5-4dae-bb57-4db45544352d" (UID: "31460a9d-62d5-4dae-bb57-4db45544352d"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.219166 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-systemd-units\") pod \"31460a9d-62d5-4dae-bb57-4db45544352d\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.219159 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "31460a9d-62d5-4dae-bb57-4db45544352d" (UID: "31460a9d-62d5-4dae-bb57-4db45544352d"). 
InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.219192 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-run-systemd\") pod \"31460a9d-62d5-4dae-bb57-4db45544352d\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.219232 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "31460a9d-62d5-4dae-bb57-4db45544352d" (UID: "31460a9d-62d5-4dae-bb57-4db45544352d"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.219274 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-etc-openvswitch\") pod \"31460a9d-62d5-4dae-bb57-4db45544352d\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.219310 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-run-ovn-kubernetes\") pod \"31460a9d-62d5-4dae-bb57-4db45544352d\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.219336 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74sb5\" (UniqueName: \"kubernetes.io/projected/31460a9d-62d5-4dae-bb57-4db45544352d-kube-api-access-74sb5\") pod \"31460a9d-62d5-4dae-bb57-4db45544352d\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " Nov 22 00:16:52 crc 
kubenswrapper[4928]: I1122 00:16:52.219355 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-slash\") pod \"31460a9d-62d5-4dae-bb57-4db45544352d\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.219411 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-run-ovn\") pod \"31460a9d-62d5-4dae-bb57-4db45544352d\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.219401 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "31460a9d-62d5-4dae-bb57-4db45544352d" (UID: "31460a9d-62d5-4dae-bb57-4db45544352d"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.219450 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-node-log" (OuterVolumeSpecName: "node-log") pod "31460a9d-62d5-4dae-bb57-4db45544352d" (UID: "31460a9d-62d5-4dae-bb57-4db45544352d"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.219433 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "31460a9d-62d5-4dae-bb57-4db45544352d" (UID: "31460a9d-62d5-4dae-bb57-4db45544352d"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.219433 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-node-log\") pod \"31460a9d-62d5-4dae-bb57-4db45544352d\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.219474 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "31460a9d-62d5-4dae-bb57-4db45544352d" (UID: "31460a9d-62d5-4dae-bb57-4db45544352d"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.219482 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-slash" (OuterVolumeSpecName: "host-slash") pod "31460a9d-62d5-4dae-bb57-4db45544352d" (UID: "31460a9d-62d5-4dae-bb57-4db45544352d"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.219538 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31460a9d-62d5-4dae-bb57-4db45544352d-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "31460a9d-62d5-4dae-bb57-4db45544352d" (UID: "31460a9d-62d5-4dae-bb57-4db45544352d"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.219562 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-run-openvswitch\") pod \"31460a9d-62d5-4dae-bb57-4db45544352d\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.219597 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-log-socket\") pod \"31460a9d-62d5-4dae-bb57-4db45544352d\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.219638 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"31460a9d-62d5-4dae-bb57-4db45544352d\" (UID: \"31460a9d-62d5-4dae-bb57-4db45544352d\") " Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.219641 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "31460a9d-62d5-4dae-bb57-4db45544352d" (UID: "31460a9d-62d5-4dae-bb57-4db45544352d"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.219666 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-log-socket" (OuterVolumeSpecName: "log-socket") pod "31460a9d-62d5-4dae-bb57-4db45544352d" (UID: "31460a9d-62d5-4dae-bb57-4db45544352d"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.219751 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31460a9d-62d5-4dae-bb57-4db45544352d-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "31460a9d-62d5-4dae-bb57-4db45544352d" (UID: "31460a9d-62d5-4dae-bb57-4db45544352d"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.219755 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "31460a9d-62d5-4dae-bb57-4db45544352d" (UID: "31460a9d-62d5-4dae-bb57-4db45544352d"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.220153 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-host-run-ovn-kubernetes\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.220202 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d8372502-459d-4f94-8cbf-65684cf7a49c-ovnkube-config\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.220203 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/31460a9d-62d5-4dae-bb57-4db45544352d-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "31460a9d-62d5-4dae-bb57-4db45544352d" (UID: "31460a9d-62d5-4dae-bb57-4db45544352d"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.220313 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-var-lib-openvswitch\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.220357 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d8372502-459d-4f94-8cbf-65684cf7a49c-env-overrides\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.220382 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-run-systemd\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.220465 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-host-cni-bin\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.220492 4928 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-host-cni-netd\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.220539 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-systemd-units\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.220556 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-etc-openvswitch\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.220642 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxnj9\" (UniqueName: \"kubernetes.io/projected/d8372502-459d-4f94-8cbf-65684cf7a49c-kube-api-access-sxnj9\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.220715 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-run-ovn\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 
00:16:52.220764 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-host-kubelet\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.220805 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-host-slash\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.220822 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-node-log\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.220849 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-log-socket\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.220894 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d8372502-459d-4f94-8cbf-65684cf7a49c-ovnkube-script-lib\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 
00:16:52.220930 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-run-openvswitch\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.220959 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d8372502-459d-4f94-8cbf-65684cf7a49c-ovn-node-metrics-cert\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.221019 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.221063 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-host-run-netns\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.221200 4928 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.221214 4928 reconciler_common.go:293] "Volume 
detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.221223 4928 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/31460a9d-62d5-4dae-bb57-4db45544352d-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.221232 4928 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.221242 4928 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.221251 4928 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.221259 4928 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-slash\") on node \"crc\" DevicePath \"\"" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.221269 4928 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.221278 4928 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-node-log\") on node \"crc\" DevicePath \"\"" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.221288 4928 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.221297 4928 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-log-socket\") on node \"crc\" DevicePath \"\"" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.221305 4928 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.221315 4928 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.221323 4928 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/31460a9d-62d5-4dae-bb57-4db45544352d-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.221333 4928 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/31460a9d-62d5-4dae-bb57-4db45544352d-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.221341 4928 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.221349 4928 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.226516 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31460a9d-62d5-4dae-bb57-4db45544352d-kube-api-access-74sb5" (OuterVolumeSpecName: "kube-api-access-74sb5") pod "31460a9d-62d5-4dae-bb57-4db45544352d" (UID: "31460a9d-62d5-4dae-bb57-4db45544352d"). InnerVolumeSpecName "kube-api-access-74sb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.227022 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31460a9d-62d5-4dae-bb57-4db45544352d-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "31460a9d-62d5-4dae-bb57-4db45544352d" (UID: "31460a9d-62d5-4dae-bb57-4db45544352d"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.237149 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "31460a9d-62d5-4dae-bb57-4db45544352d" (UID: "31460a9d-62d5-4dae-bb57-4db45544352d"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.326614 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.326692 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-host-run-netns\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.326775 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-host-run-ovn-kubernetes\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.326834 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-host-run-netns\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.326859 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d8372502-459d-4f94-8cbf-65684cf7a49c-ovnkube-config\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.326899 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-host-run-ovn-kubernetes\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.326903 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-var-lib-openvswitch\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.326950 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d8372502-459d-4f94-8cbf-65684cf7a49c-env-overrides\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.326969 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-var-lib-openvswitch\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.326992 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-run-systemd\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 
crc kubenswrapper[4928]: I1122 00:16:52.326775 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.327086 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-host-cni-bin\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.327110 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-host-cni-netd\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.327150 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-systemd-units\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.327175 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-etc-openvswitch\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 
00:16:52.327223 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxnj9\" (UniqueName: \"kubernetes.io/projected/d8372502-459d-4f94-8cbf-65684cf7a49c-kube-api-access-sxnj9\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.327275 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-run-ovn\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.327279 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-host-cni-bin\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.327334 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-host-kubelet\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.327365 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-host-slash\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.327370 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-run-systemd\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.327386 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-node-log\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.327404 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-log-socket\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.327426 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d8372502-459d-4f94-8cbf-65684cf7a49c-ovnkube-script-lib\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.327452 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-run-openvswitch\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.327479 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/d8372502-459d-4f94-8cbf-65684cf7a49c-ovn-node-metrics-cert\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.327559 4928 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/31460a9d-62d5-4dae-bb57-4db45544352d-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.327577 4928 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/31460a9d-62d5-4dae-bb57-4db45544352d-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.327588 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74sb5\" (UniqueName: \"kubernetes.io/projected/31460a9d-62d5-4dae-bb57-4db45544352d-kube-api-access-74sb5\") on node \"crc\" DevicePath \"\"" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.327757 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-host-cni-netd\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.327844 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-systemd-units\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.327889 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-run-ovn\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.327926 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-node-log\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.327982 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-log-socket\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.328037 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-host-kubelet\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.328084 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-host-slash\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.327892 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-etc-openvswitch\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.328169 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d8372502-459d-4f94-8cbf-65684cf7a49c-run-openvswitch\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.328756 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d8372502-459d-4f94-8cbf-65684cf7a49c-ovnkube-script-lib\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.328975 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d8372502-459d-4f94-8cbf-65684cf7a49c-env-overrides\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.329165 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d8372502-459d-4f94-8cbf-65684cf7a49c-ovnkube-config\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.330881 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d8372502-459d-4f94-8cbf-65684cf7a49c-ovn-node-metrics-cert\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 
00:16:52.348381 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxnj9\" (UniqueName: \"kubernetes.io/projected/d8372502-459d-4f94-8cbf-65684cf7a49c-kube-api-access-sxnj9\") pod \"ovnkube-node-cfwz9\" (UID: \"d8372502-459d-4f94-8cbf-65684cf7a49c\") " pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.468939 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nchmx_31460a9d-62d5-4dae-bb57-4db45544352d/ovnkube-controller/3.log" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.471270 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nchmx_31460a9d-62d5-4dae-bb57-4db45544352d/ovn-acl-logging/0.log" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.472029 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nchmx_31460a9d-62d5-4dae-bb57-4db45544352d/ovn-controller/0.log" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.472540 4928 generic.go:334] "Generic (PLEG): container finished" podID="31460a9d-62d5-4dae-bb57-4db45544352d" containerID="8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081" exitCode=0 Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.472568 4928 generic.go:334] "Generic (PLEG): container finished" podID="31460a9d-62d5-4dae-bb57-4db45544352d" containerID="4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001" exitCode=0 Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.472579 4928 generic.go:334] "Generic (PLEG): container finished" podID="31460a9d-62d5-4dae-bb57-4db45544352d" containerID="065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f" exitCode=0 Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.472588 4928 generic.go:334] "Generic (PLEG): container finished" podID="31460a9d-62d5-4dae-bb57-4db45544352d" 
containerID="40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185" exitCode=0 Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.472596 4928 generic.go:334] "Generic (PLEG): container finished" podID="31460a9d-62d5-4dae-bb57-4db45544352d" containerID="05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2" exitCode=0 Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.472606 4928 generic.go:334] "Generic (PLEG): container finished" podID="31460a9d-62d5-4dae-bb57-4db45544352d" containerID="0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf" exitCode=0 Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.472615 4928 generic.go:334] "Generic (PLEG): container finished" podID="31460a9d-62d5-4dae-bb57-4db45544352d" containerID="9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e" exitCode=143 Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.472623 4928 generic.go:334] "Generic (PLEG): container finished" podID="31460a9d-62d5-4dae-bb57-4db45544352d" containerID="c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9" exitCode=143 Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.472699 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.472693 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" event={"ID":"31460a9d-62d5-4dae-bb57-4db45544352d","Type":"ContainerDied","Data":"8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.472788 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" event={"ID":"31460a9d-62d5-4dae-bb57-4db45544352d","Type":"ContainerDied","Data":"4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.472863 4928 scope.go:117] "RemoveContainer" containerID="8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.472873 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" event={"ID":"31460a9d-62d5-4dae-bb57-4db45544352d","Type":"ContainerDied","Data":"065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.472904 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" event={"ID":"31460a9d-62d5-4dae-bb57-4db45544352d","Type":"ContainerDied","Data":"40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.472933 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" event={"ID":"31460a9d-62d5-4dae-bb57-4db45544352d","Type":"ContainerDied","Data":"05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.473002 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" event={"ID":"31460a9d-62d5-4dae-bb57-4db45544352d","Type":"ContainerDied","Data":"0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.473033 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.473097 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.473905 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.473945 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.473957 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.473967 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.473978 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 
00:16:52.473988 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.473998 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.474024 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" event={"ID":"31460a9d-62d5-4dae-bb57-4db45544352d","Type":"ContainerDied","Data":"9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.474053 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.474067 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.474078 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.474088 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.474098 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.474108 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.474117 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.474129 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.474139 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.474148 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.474162 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" event={"ID":"31460a9d-62d5-4dae-bb57-4db45544352d","Type":"ContainerDied","Data":"c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.474180 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.474220 4928 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.474232 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.474243 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.474348 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.474363 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.474373 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.474383 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.474392 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.474673 4928 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.474694 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nchmx" event={"ID":"31460a9d-62d5-4dae-bb57-4db45544352d","Type":"ContainerDied","Data":"93763d03266f4b3e359e3786cdcc3aceb4bc652751659120ad138f35c8791243"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.474761 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.474776 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.474786 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.475369 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.475316 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xmk2r_c474173e-56bf-467c-84e6-e473d89563c4/kube-multus/2.log" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.475407 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185"} Nov 22 00:16:52 crc kubenswrapper[4928]: 
I1122 00:16:52.475420 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.475431 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.475440 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.475449 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.475459 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.475891 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xmk2r_c474173e-56bf-467c-84e6-e473d89563c4/kube-multus/1.log" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.475929 4928 generic.go:334] "Generic (PLEG): container finished" podID="c474173e-56bf-467c-84e6-e473d89563c4" containerID="64911518eaed9b948c61c7dbf99b4937d5bb5d2dc74889c3e69c591502f950ed" exitCode=2 Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.475955 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xmk2r" event={"ID":"c474173e-56bf-467c-84e6-e473d89563c4","Type":"ContainerDied","Data":"64911518eaed9b948c61c7dbf99b4937d5bb5d2dc74889c3e69c591502f950ed"} Nov 22 00:16:52 crc 
kubenswrapper[4928]: I1122 00:16:52.475975 4928 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7e8a6e52face946c0a61439452b4d078ca11a0fac66d2e687e86442e4b4acf7f"} Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.476464 4928 scope.go:117] "RemoveContainer" containerID="64911518eaed9b948c61c7dbf99b4937d5bb5d2dc74889c3e69c591502f950ed" Nov 22 00:16:52 crc kubenswrapper[4928]: E1122 00:16:52.476940 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-xmk2r_openshift-multus(c474173e-56bf-467c-84e6-e473d89563c4)\"" pod="openshift-multus/multus-xmk2r" podUID="c474173e-56bf-467c-84e6-e473d89563c4" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.479642 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.515836 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nchmx"] Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.516950 4928 scope.go:117] "RemoveContainer" containerID="de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.519046 4928 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nchmx"] Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.563661 4928 scope.go:117] "RemoveContainer" containerID="4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.582214 4928 scope.go:117] "RemoveContainer" containerID="065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.594592 4928 scope.go:117] "RemoveContainer" 
containerID="40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.607237 4928 scope.go:117] "RemoveContainer" containerID="05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.618408 4928 scope.go:117] "RemoveContainer" containerID="0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.632813 4928 scope.go:117] "RemoveContainer" containerID="9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.644972 4928 scope.go:117] "RemoveContainer" containerID="c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.658009 4928 scope.go:117] "RemoveContainer" containerID="5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.669089 4928 scope.go:117] "RemoveContainer" containerID="8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081" Nov 22 00:16:52 crc kubenswrapper[4928]: E1122 00:16:52.669410 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081\": container with ID starting with 8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081 not found: ID does not exist" containerID="8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.669455 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081"} err="failed to get container status \"8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081\": rpc error: code = NotFound desc = could not 
find container \"8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081\": container with ID starting with 8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081 not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.669484 4928 scope.go:117] "RemoveContainer" containerID="de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10" Nov 22 00:16:52 crc kubenswrapper[4928]: E1122 00:16:52.669756 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10\": container with ID starting with de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10 not found: ID does not exist" containerID="de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.669813 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10"} err="failed to get container status \"de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10\": rpc error: code = NotFound desc = could not find container \"de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10\": container with ID starting with de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10 not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.669848 4928 scope.go:117] "RemoveContainer" containerID="4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001" Nov 22 00:16:52 crc kubenswrapper[4928]: E1122 00:16:52.670068 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\": container with ID starting with 4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001 not found: ID 
does not exist" containerID="4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.670092 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001"} err="failed to get container status \"4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\": rpc error: code = NotFound desc = could not find container \"4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\": container with ID starting with 4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001 not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.670106 4928 scope.go:117] "RemoveContainer" containerID="065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f" Nov 22 00:16:52 crc kubenswrapper[4928]: E1122 00:16:52.670350 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\": container with ID starting with 065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f not found: ID does not exist" containerID="065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.670379 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f"} err="failed to get container status \"065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\": rpc error: code = NotFound desc = could not find container \"065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\": container with ID starting with 065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.670395 4928 
scope.go:117] "RemoveContainer" containerID="40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185" Nov 22 00:16:52 crc kubenswrapper[4928]: E1122 00:16:52.670579 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\": container with ID starting with 40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185 not found: ID does not exist" containerID="40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.670605 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185"} err="failed to get container status \"40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\": rpc error: code = NotFound desc = could not find container \"40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\": container with ID starting with 40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185 not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.670617 4928 scope.go:117] "RemoveContainer" containerID="05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2" Nov 22 00:16:52 crc kubenswrapper[4928]: E1122 00:16:52.670849 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\": container with ID starting with 05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2 not found: ID does not exist" containerID="05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.670873 4928 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2"} err="failed to get container status \"05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\": rpc error: code = NotFound desc = could not find container \"05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\": container with ID starting with 05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2 not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.670887 4928 scope.go:117] "RemoveContainer" containerID="0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf" Nov 22 00:16:52 crc kubenswrapper[4928]: E1122 00:16:52.671115 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\": container with ID starting with 0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf not found: ID does not exist" containerID="0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.671136 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf"} err="failed to get container status \"0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\": rpc error: code = NotFound desc = could not find container \"0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\": container with ID starting with 0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.671147 4928 scope.go:117] "RemoveContainer" containerID="9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e" Nov 22 00:16:52 crc kubenswrapper[4928]: E1122 00:16:52.671314 4928 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\": container with ID starting with 9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e not found: ID does not exist" containerID="9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.671333 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e"} err="failed to get container status \"9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\": rpc error: code = NotFound desc = could not find container \"9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\": container with ID starting with 9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.671345 4928 scope.go:117] "RemoveContainer" containerID="c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9" Nov 22 00:16:52 crc kubenswrapper[4928]: E1122 00:16:52.671686 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\": container with ID starting with c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9 not found: ID does not exist" containerID="c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.671727 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9"} err="failed to get container status \"c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\": rpc error: code = NotFound desc = could not find container 
\"c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\": container with ID starting with c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9 not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.671741 4928 scope.go:117] "RemoveContainer" containerID="5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0" Nov 22 00:16:52 crc kubenswrapper[4928]: E1122 00:16:52.671979 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\": container with ID starting with 5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0 not found: ID does not exist" containerID="5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.672004 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0"} err="failed to get container status \"5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\": rpc error: code = NotFound desc = could not find container \"5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\": container with ID starting with 5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0 not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.672020 4928 scope.go:117] "RemoveContainer" containerID="8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.672268 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081"} err="failed to get container status \"8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081\": rpc error: code = NotFound desc = could not find 
container \"8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081\": container with ID starting with 8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081 not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.672290 4928 scope.go:117] "RemoveContainer" containerID="de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.672484 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10"} err="failed to get container status \"de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10\": rpc error: code = NotFound desc = could not find container \"de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10\": container with ID starting with de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10 not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.672504 4928 scope.go:117] "RemoveContainer" containerID="4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.672696 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001"} err="failed to get container status \"4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\": rpc error: code = NotFound desc = could not find container \"4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\": container with ID starting with 4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001 not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.672728 4928 scope.go:117] "RemoveContainer" containerID="065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.672927 4928 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f"} err="failed to get container status \"065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\": rpc error: code = NotFound desc = could not find container \"065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\": container with ID starting with 065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.672946 4928 scope.go:117] "RemoveContainer" containerID="40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.673098 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185"} err="failed to get container status \"40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\": rpc error: code = NotFound desc = could not find container \"40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\": container with ID starting with 40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185 not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.673115 4928 scope.go:117] "RemoveContainer" containerID="05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.673279 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2"} err="failed to get container status \"05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\": rpc error: code = NotFound desc = could not find container \"05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\": container with ID starting with 
05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2 not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.673297 4928 scope.go:117] "RemoveContainer" containerID="0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.673473 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf"} err="failed to get container status \"0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\": rpc error: code = NotFound desc = could not find container \"0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\": container with ID starting with 0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.673495 4928 scope.go:117] "RemoveContainer" containerID="9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.673692 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e"} err="failed to get container status \"9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\": rpc error: code = NotFound desc = could not find container \"9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\": container with ID starting with 9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.673712 4928 scope.go:117] "RemoveContainer" containerID="c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.674071 4928 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9"} err="failed to get container status \"c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\": rpc error: code = NotFound desc = could not find container \"c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\": container with ID starting with c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9 not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.674098 4928 scope.go:117] "RemoveContainer" containerID="5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.674313 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0"} err="failed to get container status \"5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\": rpc error: code = NotFound desc = could not find container \"5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\": container with ID starting with 5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0 not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.674334 4928 scope.go:117] "RemoveContainer" containerID="8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.674531 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081"} err="failed to get container status \"8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081\": rpc error: code = NotFound desc = could not find container \"8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081\": container with ID starting with 8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081 not found: ID does not 
exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.674555 4928 scope.go:117] "RemoveContainer" containerID="de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.674716 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10"} err="failed to get container status \"de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10\": rpc error: code = NotFound desc = could not find container \"de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10\": container with ID starting with de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10 not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.674735 4928 scope.go:117] "RemoveContainer" containerID="4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.674911 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001"} err="failed to get container status \"4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\": rpc error: code = NotFound desc = could not find container \"4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\": container with ID starting with 4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001 not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.674931 4928 scope.go:117] "RemoveContainer" containerID="065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.675111 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f"} err="failed to get container status 
\"065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\": rpc error: code = NotFound desc = could not find container \"065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\": container with ID starting with 065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.675130 4928 scope.go:117] "RemoveContainer" containerID="40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.675341 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185"} err="failed to get container status \"40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\": rpc error: code = NotFound desc = could not find container \"40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\": container with ID starting with 40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185 not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.675362 4928 scope.go:117] "RemoveContainer" containerID="05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.675556 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2"} err="failed to get container status \"05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\": rpc error: code = NotFound desc = could not find container \"05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\": container with ID starting with 05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2 not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.675574 4928 scope.go:117] "RemoveContainer" 
containerID="0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.675803 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf"} err="failed to get container status \"0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\": rpc error: code = NotFound desc = could not find container \"0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\": container with ID starting with 0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.675821 4928 scope.go:117] "RemoveContainer" containerID="9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.676037 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e"} err="failed to get container status \"9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\": rpc error: code = NotFound desc = could not find container \"9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\": container with ID starting with 9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.676055 4928 scope.go:117] "RemoveContainer" containerID="c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.676270 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9"} err="failed to get container status \"c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\": rpc error: code = NotFound desc = could 
not find container \"c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\": container with ID starting with c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9 not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.676293 4928 scope.go:117] "RemoveContainer" containerID="5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.676568 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0"} err="failed to get container status \"5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\": rpc error: code = NotFound desc = could not find container \"5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\": container with ID starting with 5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0 not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.676587 4928 scope.go:117] "RemoveContainer" containerID="8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.676824 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081"} err="failed to get container status \"8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081\": rpc error: code = NotFound desc = could not find container \"8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081\": container with ID starting with 8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081 not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.676843 4928 scope.go:117] "RemoveContainer" containerID="de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 
00:16:52.677034 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10"} err="failed to get container status \"de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10\": rpc error: code = NotFound desc = could not find container \"de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10\": container with ID starting with de20be5b9483aaf5f6fab3afaeace7ede669257d6ff58fa5562bf171139dfe10 not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.677054 4928 scope.go:117] "RemoveContainer" containerID="4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.677606 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001"} err="failed to get container status \"4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\": rpc error: code = NotFound desc = could not find container \"4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001\": container with ID starting with 4bcc5fe9f557578d21086a9421ce10bea26e20bdd540bb96857ee4b4d35ab001 not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.677626 4928 scope.go:117] "RemoveContainer" containerID="065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.677955 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f"} err="failed to get container status \"065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\": rpc error: code = NotFound desc = could not find container \"065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f\": container with ID starting with 
065918d1ecf91fb592aa418ffab4029b0749fa35fc32ea11cf4b1ca4a3dae80f not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.677979 4928 scope.go:117] "RemoveContainer" containerID="40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.678339 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185"} err="failed to get container status \"40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\": rpc error: code = NotFound desc = could not find container \"40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185\": container with ID starting with 40c37cdf2ad374d74e3fe16e7087c7fe19f5c096e333ff2553d89d6515301185 not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.678364 4928 scope.go:117] "RemoveContainer" containerID="05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.678581 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2"} err="failed to get container status \"05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\": rpc error: code = NotFound desc = could not find container \"05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2\": container with ID starting with 05163524e3a0d593b6018a2c29ddb3cc51495b64d34fa6dab0f6a9803356a7c2 not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.678602 4928 scope.go:117] "RemoveContainer" containerID="0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.678874 4928 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf"} err="failed to get container status \"0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\": rpc error: code = NotFound desc = could not find container \"0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf\": container with ID starting with 0132975ca03a0e6b6a25afdc9ba7be52ff977efa099b30960117a8887e7aa4bf not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.678896 4928 scope.go:117] "RemoveContainer" containerID="9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.679227 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e"} err="failed to get container status \"9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\": rpc error: code = NotFound desc = could not find container \"9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e\": container with ID starting with 9b4ffac738875ee2f1e2c3921b87897d23c5f07141148a9d7292c40f8d65697e not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.679252 4928 scope.go:117] "RemoveContainer" containerID="c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.679507 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9"} err="failed to get container status \"c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\": rpc error: code = NotFound desc = could not find container \"c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9\": container with ID starting with c2641e3487be155791358f24d72c0f5b472a349b4c124bd6f63b441100fff4e9 not found: ID does not 
exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.679528 4928 scope.go:117] "RemoveContainer" containerID="5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.679752 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0"} err="failed to get container status \"5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\": rpc error: code = NotFound desc = could not find container \"5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0\": container with ID starting with 5d753c0ff230806d376dea2120b39fdaadaa002c32820b1458e1db3d9307b4f0 not found: ID does not exist" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.679775 4928 scope.go:117] "RemoveContainer" containerID="8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081" Nov 22 00:16:52 crc kubenswrapper[4928]: I1122 00:16:52.680083 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081"} err="failed to get container status \"8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081\": rpc error: code = NotFound desc = could not find container \"8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081\": container with ID starting with 8cb36a68d02d9e802a8b10882de1a05ec978213e3037f82261ce0e8b3965b081 not found: ID does not exist" Nov 22 00:16:53 crc kubenswrapper[4928]: I1122 00:16:53.484703 4928 generic.go:334] "Generic (PLEG): container finished" podID="d8372502-459d-4f94-8cbf-65684cf7a49c" containerID="77d600dbcf67d75819ff6317a1b88ef463b648909387543727abedd6ce58e4f8" exitCode=0 Nov 22 00:16:53 crc kubenswrapper[4928]: I1122 00:16:53.485172 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" 
event={"ID":"d8372502-459d-4f94-8cbf-65684cf7a49c","Type":"ContainerDied","Data":"77d600dbcf67d75819ff6317a1b88ef463b648909387543727abedd6ce58e4f8"} Nov 22 00:16:53 crc kubenswrapper[4928]: I1122 00:16:53.485216 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" event={"ID":"d8372502-459d-4f94-8cbf-65684cf7a49c","Type":"ContainerStarted","Data":"d01d4545331b180a2ccdd835ffb383541ac2a8345ddab2944d8b557752a16d1f"} Nov 22 00:16:53 crc kubenswrapper[4928]: I1122 00:16:53.626125 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31460a9d-62d5-4dae-bb57-4db45544352d" path="/var/lib/kubelet/pods/31460a9d-62d5-4dae-bb57-4db45544352d/volumes" Nov 22 00:16:54 crc kubenswrapper[4928]: I1122 00:16:54.499276 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" event={"ID":"d8372502-459d-4f94-8cbf-65684cf7a49c","Type":"ContainerStarted","Data":"dbb6328a77a943892104b23bcbbd5533048227a5e52239900175595903f644ab"} Nov 22 00:16:54 crc kubenswrapper[4928]: I1122 00:16:54.499630 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" event={"ID":"d8372502-459d-4f94-8cbf-65684cf7a49c","Type":"ContainerStarted","Data":"00ef9d09c7a799aeb55cb7ff0f736e34ab32dec1cb40663fc741ce02a60c4dcb"} Nov 22 00:16:54 crc kubenswrapper[4928]: I1122 00:16:54.499643 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" event={"ID":"d8372502-459d-4f94-8cbf-65684cf7a49c","Type":"ContainerStarted","Data":"4ffe014fe72f3f385af723ed3b53c032cb3473300e633d838d6492dbf004da92"} Nov 22 00:16:54 crc kubenswrapper[4928]: I1122 00:16:54.499652 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" event={"ID":"d8372502-459d-4f94-8cbf-65684cf7a49c","Type":"ContainerStarted","Data":"dcf850f8afd7ebead954d233e1dd8c611606601359881cec43e9d75ecd6b4447"} 
Nov 22 00:16:54 crc kubenswrapper[4928]: I1122 00:16:54.499661 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" event={"ID":"d8372502-459d-4f94-8cbf-65684cf7a49c","Type":"ContainerStarted","Data":"9a84e357a7801d9ce9973b52272dfa5f230b94a4abcd0858101c57c9bdd09eea"} Nov 22 00:16:54 crc kubenswrapper[4928]: I1122 00:16:54.499670 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" event={"ID":"d8372502-459d-4f94-8cbf-65684cf7a49c","Type":"ContainerStarted","Data":"1a0a572d7009e71694296bdb6614f04bf4bb67942abb1f70150b77eb1cfe5075"} Nov 22 00:16:57 crc kubenswrapper[4928]: I1122 00:16:57.518028 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" event={"ID":"d8372502-459d-4f94-8cbf-65684cf7a49c","Type":"ContainerStarted","Data":"d3df0faad2d671b2af36fb71279ab1e7240655f5af8234a783f77d73f6ca87f3"} Nov 22 00:16:59 crc kubenswrapper[4928]: I1122 00:16:59.532458 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" event={"ID":"d8372502-459d-4f94-8cbf-65684cf7a49c","Type":"ContainerStarted","Data":"43ea53ff676c245daced2e0b4affad8772be7552077562c2dabe93cba5376e90"} Nov 22 00:16:59 crc kubenswrapper[4928]: I1122 00:16:59.533043 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:59 crc kubenswrapper[4928]: I1122 00:16:59.533056 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:59 crc kubenswrapper[4928]: I1122 00:16:59.533067 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:59 crc kubenswrapper[4928]: I1122 00:16:59.559905 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:59 crc kubenswrapper[4928]: I1122 00:16:59.560808 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:16:59 crc kubenswrapper[4928]: I1122 00:16:59.562661 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" podStartSLOduration=7.5626456399999995 podStartE2EDuration="7.56264564s" podCreationTimestamp="2025-11-22 00:16:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:16:59.557327746 +0000 UTC m=+654.845358937" watchObservedRunningTime="2025-11-22 00:16:59.56264564 +0000 UTC m=+654.850676831" Nov 22 00:17:03 crc kubenswrapper[4928]: I1122 00:17:03.618837 4928 scope.go:117] "RemoveContainer" containerID="64911518eaed9b948c61c7dbf99b4937d5bb5d2dc74889c3e69c591502f950ed" Nov 22 00:17:03 crc kubenswrapper[4928]: E1122 00:17:03.619376 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-xmk2r_openshift-multus(c474173e-56bf-467c-84e6-e473d89563c4)\"" pod="openshift-multus/multus-xmk2r" podUID="c474173e-56bf-467c-84e6-e473d89563c4" Nov 22 00:17:15 crc kubenswrapper[4928]: I1122 00:17:15.620776 4928 scope.go:117] "RemoveContainer" containerID="64911518eaed9b948c61c7dbf99b4937d5bb5d2dc74889c3e69c591502f950ed" Nov 22 00:17:16 crc kubenswrapper[4928]: I1122 00:17:16.114225 4928 scope.go:117] "RemoveContainer" containerID="7e8a6e52face946c0a61439452b4d078ca11a0fac66d2e687e86442e4b4acf7f" Nov 22 00:17:16 crc kubenswrapper[4928]: I1122 00:17:16.620096 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xmk2r_c474173e-56bf-467c-84e6-e473d89563c4/kube-multus/2.log" Nov 22 00:17:16 crc 
kubenswrapper[4928]: I1122 00:17:16.620737 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xmk2r" event={"ID":"c474173e-56bf-467c-84e6-e473d89563c4","Type":"ContainerStarted","Data":"455ef2e2ed74ba8a5e1f2250fbf9945875304b5caaa1209d4297cbf5fb9d5097"} Nov 22 00:17:22 crc kubenswrapper[4928]: I1122 00:17:22.501649 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cfwz9" Nov 22 00:18:03 crc kubenswrapper[4928]: I1122 00:18:03.918977 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vftb7"] Nov 22 00:18:03 crc kubenswrapper[4928]: I1122 00:18:03.920983 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vftb7" podUID="015c012b-0209-45c2-bf4b-09f2e274b810" containerName="registry-server" containerID="cri-o://6f7711bcf70f8dbadcfbb0362788f5cac7838f01d6fdd66daeaeae85e519407c" gracePeriod=30 Nov 22 00:18:04 crc kubenswrapper[4928]: I1122 00:18:04.220335 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vftb7" Nov 22 00:18:04 crc kubenswrapper[4928]: I1122 00:18:04.248410 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/015c012b-0209-45c2-bf4b-09f2e274b810-catalog-content\") pod \"015c012b-0209-45c2-bf4b-09f2e274b810\" (UID: \"015c012b-0209-45c2-bf4b-09f2e274b810\") " Nov 22 00:18:04 crc kubenswrapper[4928]: I1122 00:18:04.248460 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/015c012b-0209-45c2-bf4b-09f2e274b810-utilities\") pod \"015c012b-0209-45c2-bf4b-09f2e274b810\" (UID: \"015c012b-0209-45c2-bf4b-09f2e274b810\") " Nov 22 00:18:04 crc kubenswrapper[4928]: I1122 00:18:04.248526 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86thh\" (UniqueName: \"kubernetes.io/projected/015c012b-0209-45c2-bf4b-09f2e274b810-kube-api-access-86thh\") pod \"015c012b-0209-45c2-bf4b-09f2e274b810\" (UID: \"015c012b-0209-45c2-bf4b-09f2e274b810\") " Nov 22 00:18:04 crc kubenswrapper[4928]: I1122 00:18:04.249596 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/015c012b-0209-45c2-bf4b-09f2e274b810-utilities" (OuterVolumeSpecName: "utilities") pod "015c012b-0209-45c2-bf4b-09f2e274b810" (UID: "015c012b-0209-45c2-bf4b-09f2e274b810"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:18:04 crc kubenswrapper[4928]: I1122 00:18:04.255164 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/015c012b-0209-45c2-bf4b-09f2e274b810-kube-api-access-86thh" (OuterVolumeSpecName: "kube-api-access-86thh") pod "015c012b-0209-45c2-bf4b-09f2e274b810" (UID: "015c012b-0209-45c2-bf4b-09f2e274b810"). InnerVolumeSpecName "kube-api-access-86thh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:18:04 crc kubenswrapper[4928]: I1122 00:18:04.266214 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/015c012b-0209-45c2-bf4b-09f2e274b810-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "015c012b-0209-45c2-bf4b-09f2e274b810" (UID: "015c012b-0209-45c2-bf4b-09f2e274b810"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:18:04 crc kubenswrapper[4928]: I1122 00:18:04.350212 4928 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/015c012b-0209-45c2-bf4b-09f2e274b810-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 00:18:04 crc kubenswrapper[4928]: I1122 00:18:04.350252 4928 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/015c012b-0209-45c2-bf4b-09f2e274b810-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 00:18:04 crc kubenswrapper[4928]: I1122 00:18:04.350264 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86thh\" (UniqueName: \"kubernetes.io/projected/015c012b-0209-45c2-bf4b-09f2e274b810-kube-api-access-86thh\") on node \"crc\" DevicePath \"\"" Nov 22 00:18:04 crc kubenswrapper[4928]: I1122 00:18:04.864713 4928 generic.go:334] "Generic (PLEG): container finished" podID="015c012b-0209-45c2-bf4b-09f2e274b810" containerID="6f7711bcf70f8dbadcfbb0362788f5cac7838f01d6fdd66daeaeae85e519407c" exitCode=0 Nov 22 00:18:04 crc kubenswrapper[4928]: I1122 00:18:04.864760 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vftb7" event={"ID":"015c012b-0209-45c2-bf4b-09f2e274b810","Type":"ContainerDied","Data":"6f7711bcf70f8dbadcfbb0362788f5cac7838f01d6fdd66daeaeae85e519407c"} Nov 22 00:18:04 crc kubenswrapper[4928]: I1122 00:18:04.864813 4928 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vftb7" Nov 22 00:18:04 crc kubenswrapper[4928]: I1122 00:18:04.864832 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vftb7" event={"ID":"015c012b-0209-45c2-bf4b-09f2e274b810","Type":"ContainerDied","Data":"e7ab809947e90e45385e0c39b022afdfd5c60944315162084445be2cabc2e761"} Nov 22 00:18:04 crc kubenswrapper[4928]: I1122 00:18:04.864854 4928 scope.go:117] "RemoveContainer" containerID="6f7711bcf70f8dbadcfbb0362788f5cac7838f01d6fdd66daeaeae85e519407c" Nov 22 00:18:04 crc kubenswrapper[4928]: I1122 00:18:04.884256 4928 scope.go:117] "RemoveContainer" containerID="5ed0f90f6956248c0f661bfaa867147f8ffb5c1dd883a340170d76fcf5cd8038" Nov 22 00:18:04 crc kubenswrapper[4928]: I1122 00:18:04.892808 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vftb7"] Nov 22 00:18:04 crc kubenswrapper[4928]: I1122 00:18:04.897183 4928 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vftb7"] Nov 22 00:18:04 crc kubenswrapper[4928]: I1122 00:18:04.911302 4928 scope.go:117] "RemoveContainer" containerID="d012c0ece327fb5218458a88c26446c275d4846ce7b313ce7e099ccf4772db5d" Nov 22 00:18:04 crc kubenswrapper[4928]: I1122 00:18:04.926970 4928 scope.go:117] "RemoveContainer" containerID="6f7711bcf70f8dbadcfbb0362788f5cac7838f01d6fdd66daeaeae85e519407c" Nov 22 00:18:04 crc kubenswrapper[4928]: E1122 00:18:04.927430 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f7711bcf70f8dbadcfbb0362788f5cac7838f01d6fdd66daeaeae85e519407c\": container with ID starting with 6f7711bcf70f8dbadcfbb0362788f5cac7838f01d6fdd66daeaeae85e519407c not found: ID does not exist" containerID="6f7711bcf70f8dbadcfbb0362788f5cac7838f01d6fdd66daeaeae85e519407c" Nov 22 00:18:04 crc kubenswrapper[4928]: I1122 00:18:04.927482 4928 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f7711bcf70f8dbadcfbb0362788f5cac7838f01d6fdd66daeaeae85e519407c"} err="failed to get container status \"6f7711bcf70f8dbadcfbb0362788f5cac7838f01d6fdd66daeaeae85e519407c\": rpc error: code = NotFound desc = could not find container \"6f7711bcf70f8dbadcfbb0362788f5cac7838f01d6fdd66daeaeae85e519407c\": container with ID starting with 6f7711bcf70f8dbadcfbb0362788f5cac7838f01d6fdd66daeaeae85e519407c not found: ID does not exist" Nov 22 00:18:04 crc kubenswrapper[4928]: I1122 00:18:04.927511 4928 scope.go:117] "RemoveContainer" containerID="5ed0f90f6956248c0f661bfaa867147f8ffb5c1dd883a340170d76fcf5cd8038" Nov 22 00:18:04 crc kubenswrapper[4928]: E1122 00:18:04.927996 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ed0f90f6956248c0f661bfaa867147f8ffb5c1dd883a340170d76fcf5cd8038\": container with ID starting with 5ed0f90f6956248c0f661bfaa867147f8ffb5c1dd883a340170d76fcf5cd8038 not found: ID does not exist" containerID="5ed0f90f6956248c0f661bfaa867147f8ffb5c1dd883a340170d76fcf5cd8038" Nov 22 00:18:04 crc kubenswrapper[4928]: I1122 00:18:04.928040 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ed0f90f6956248c0f661bfaa867147f8ffb5c1dd883a340170d76fcf5cd8038"} err="failed to get container status \"5ed0f90f6956248c0f661bfaa867147f8ffb5c1dd883a340170d76fcf5cd8038\": rpc error: code = NotFound desc = could not find container \"5ed0f90f6956248c0f661bfaa867147f8ffb5c1dd883a340170d76fcf5cd8038\": container with ID starting with 5ed0f90f6956248c0f661bfaa867147f8ffb5c1dd883a340170d76fcf5cd8038 not found: ID does not exist" Nov 22 00:18:04 crc kubenswrapper[4928]: I1122 00:18:04.928078 4928 scope.go:117] "RemoveContainer" containerID="d012c0ece327fb5218458a88c26446c275d4846ce7b313ce7e099ccf4772db5d" Nov 22 00:18:04 crc kubenswrapper[4928]: E1122 
00:18:04.928420 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d012c0ece327fb5218458a88c26446c275d4846ce7b313ce7e099ccf4772db5d\": container with ID starting with d012c0ece327fb5218458a88c26446c275d4846ce7b313ce7e099ccf4772db5d not found: ID does not exist" containerID="d012c0ece327fb5218458a88c26446c275d4846ce7b313ce7e099ccf4772db5d" Nov 22 00:18:04 crc kubenswrapper[4928]: I1122 00:18:04.928447 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d012c0ece327fb5218458a88c26446c275d4846ce7b313ce7e099ccf4772db5d"} err="failed to get container status \"d012c0ece327fb5218458a88c26446c275d4846ce7b313ce7e099ccf4772db5d\": rpc error: code = NotFound desc = could not find container \"d012c0ece327fb5218458a88c26446c275d4846ce7b313ce7e099ccf4772db5d\": container with ID starting with d012c0ece327fb5218458a88c26446c275d4846ce7b313ce7e099ccf4772db5d not found: ID does not exist" Nov 22 00:18:05 crc kubenswrapper[4928]: I1122 00:18:05.623741 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="015c012b-0209-45c2-bf4b-09f2e274b810" path="/var/lib/kubelet/pods/015c012b-0209-45c2-bf4b-09f2e274b810/volumes" Nov 22 00:18:07 crc kubenswrapper[4928]: I1122 00:18:07.364613 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv"] Nov 22 00:18:07 crc kubenswrapper[4928]: E1122 00:18:07.365023 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="015c012b-0209-45c2-bf4b-09f2e274b810" containerName="registry-server" Nov 22 00:18:07 crc kubenswrapper[4928]: I1122 00:18:07.365037 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="015c012b-0209-45c2-bf4b-09f2e274b810" containerName="registry-server" Nov 22 00:18:07 crc kubenswrapper[4928]: E1122 00:18:07.365050 4928 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="015c012b-0209-45c2-bf4b-09f2e274b810" containerName="extract-utilities" Nov 22 00:18:07 crc kubenswrapper[4928]: I1122 00:18:07.365057 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="015c012b-0209-45c2-bf4b-09f2e274b810" containerName="extract-utilities" Nov 22 00:18:07 crc kubenswrapper[4928]: E1122 00:18:07.365070 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="015c012b-0209-45c2-bf4b-09f2e274b810" containerName="extract-content" Nov 22 00:18:07 crc kubenswrapper[4928]: I1122 00:18:07.365077 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="015c012b-0209-45c2-bf4b-09f2e274b810" containerName="extract-content" Nov 22 00:18:07 crc kubenswrapper[4928]: I1122 00:18:07.365159 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="015c012b-0209-45c2-bf4b-09f2e274b810" containerName="registry-server" Nov 22 00:18:07 crc kubenswrapper[4928]: I1122 00:18:07.365822 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv" Nov 22 00:18:07 crc kubenswrapper[4928]: I1122 00:18:07.370257 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 22 00:18:07 crc kubenswrapper[4928]: I1122 00:18:07.377982 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv"] Nov 22 00:18:07 crc kubenswrapper[4928]: I1122 00:18:07.387276 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c76x6\" (UniqueName: \"kubernetes.io/projected/c5d282e4-90d5-40c4-a8a2-2a34a55fe743-kube-api-access-c76x6\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv\" (UID: \"c5d282e4-90d5-40c4-a8a2-2a34a55fe743\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv" 
Nov 22 00:18:07 crc kubenswrapper[4928]: I1122 00:18:07.387391 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5d282e4-90d5-40c4-a8a2-2a34a55fe743-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv\" (UID: \"c5d282e4-90d5-40c4-a8a2-2a34a55fe743\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv" Nov 22 00:18:07 crc kubenswrapper[4928]: I1122 00:18:07.387447 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5d282e4-90d5-40c4-a8a2-2a34a55fe743-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv\" (UID: \"c5d282e4-90d5-40c4-a8a2-2a34a55fe743\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv" Nov 22 00:18:07 crc kubenswrapper[4928]: I1122 00:18:07.488221 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5d282e4-90d5-40c4-a8a2-2a34a55fe743-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv\" (UID: \"c5d282e4-90d5-40c4-a8a2-2a34a55fe743\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv" Nov 22 00:18:07 crc kubenswrapper[4928]: I1122 00:18:07.488278 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5d282e4-90d5-40c4-a8a2-2a34a55fe743-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv\" (UID: \"c5d282e4-90d5-40c4-a8a2-2a34a55fe743\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv" Nov 22 00:18:07 crc kubenswrapper[4928]: I1122 00:18:07.488347 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-c76x6\" (UniqueName: \"kubernetes.io/projected/c5d282e4-90d5-40c4-a8a2-2a34a55fe743-kube-api-access-c76x6\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv\" (UID: \"c5d282e4-90d5-40c4-a8a2-2a34a55fe743\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv" Nov 22 00:18:07 crc kubenswrapper[4928]: I1122 00:18:07.488949 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5d282e4-90d5-40c4-a8a2-2a34a55fe743-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv\" (UID: \"c5d282e4-90d5-40c4-a8a2-2a34a55fe743\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv" Nov 22 00:18:07 crc kubenswrapper[4928]: I1122 00:18:07.489054 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5d282e4-90d5-40c4-a8a2-2a34a55fe743-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv\" (UID: \"c5d282e4-90d5-40c4-a8a2-2a34a55fe743\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv" Nov 22 00:18:07 crc kubenswrapper[4928]: I1122 00:18:07.521682 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c76x6\" (UniqueName: \"kubernetes.io/projected/c5d282e4-90d5-40c4-a8a2-2a34a55fe743-kube-api-access-c76x6\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv\" (UID: \"c5d282e4-90d5-40c4-a8a2-2a34a55fe743\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv" Nov 22 00:18:07 crc kubenswrapper[4928]: I1122 00:18:07.685073 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv" Nov 22 00:18:08 crc kubenswrapper[4928]: I1122 00:18:08.062597 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv"] Nov 22 00:18:08 crc kubenswrapper[4928]: W1122 00:18:08.074243 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5d282e4_90d5_40c4_a8a2_2a34a55fe743.slice/crio-de0bbc65a139c2bbe042dddcaaaa79b4e32ab21bee2911caec9ca29764486f3d WatchSource:0}: Error finding container de0bbc65a139c2bbe042dddcaaaa79b4e32ab21bee2911caec9ca29764486f3d: Status 404 returned error can't find the container with id de0bbc65a139c2bbe042dddcaaaa79b4e32ab21bee2911caec9ca29764486f3d Nov 22 00:18:08 crc kubenswrapper[4928]: I1122 00:18:08.892121 4928 generic.go:334] "Generic (PLEG): container finished" podID="c5d282e4-90d5-40c4-a8a2-2a34a55fe743" containerID="b17357d381a6d79437f3d8bb671d871386dc3ea32d6a2d8012260e8ef39f7e72" exitCode=0 Nov 22 00:18:08 crc kubenswrapper[4928]: I1122 00:18:08.892381 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv" event={"ID":"c5d282e4-90d5-40c4-a8a2-2a34a55fe743","Type":"ContainerDied","Data":"b17357d381a6d79437f3d8bb671d871386dc3ea32d6a2d8012260e8ef39f7e72"} Nov 22 00:18:08 crc kubenswrapper[4928]: I1122 00:18:08.892480 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv" event={"ID":"c5d282e4-90d5-40c4-a8a2-2a34a55fe743","Type":"ContainerStarted","Data":"de0bbc65a139c2bbe042dddcaaaa79b4e32ab21bee2911caec9ca29764486f3d"} Nov 22 00:18:08 crc kubenswrapper[4928]: I1122 00:18:08.894315 4928 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Nov 22 00:18:10 crc kubenswrapper[4928]: I1122 00:18:10.908373 4928 generic.go:334] "Generic (PLEG): container finished" podID="c5d282e4-90d5-40c4-a8a2-2a34a55fe743" containerID="81595340b2799b210ea88d4aa8ef201199ab4ecb35e13fcbd6d123a7ccb3310d" exitCode=0 Nov 22 00:18:10 crc kubenswrapper[4928]: I1122 00:18:10.908513 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv" event={"ID":"c5d282e4-90d5-40c4-a8a2-2a34a55fe743","Type":"ContainerDied","Data":"81595340b2799b210ea88d4aa8ef201199ab4ecb35e13fcbd6d123a7ccb3310d"} Nov 22 00:18:11 crc kubenswrapper[4928]: I1122 00:18:11.917967 4928 generic.go:334] "Generic (PLEG): container finished" podID="c5d282e4-90d5-40c4-a8a2-2a34a55fe743" containerID="68317ae8bc2d2227a935df8c1fcb010407bd95aa84e25e622a1508c6d52c297e" exitCode=0 Nov 22 00:18:11 crc kubenswrapper[4928]: I1122 00:18:11.918098 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv" event={"ID":"c5d282e4-90d5-40c4-a8a2-2a34a55fe743","Type":"ContainerDied","Data":"68317ae8bc2d2227a935df8c1fcb010407bd95aa84e25e622a1508c6d52c297e"} Nov 22 00:18:13 crc kubenswrapper[4928]: I1122 00:18:13.157741 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv" Nov 22 00:18:13 crc kubenswrapper[4928]: I1122 00:18:13.258260 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5d282e4-90d5-40c4-a8a2-2a34a55fe743-bundle\") pod \"c5d282e4-90d5-40c4-a8a2-2a34a55fe743\" (UID: \"c5d282e4-90d5-40c4-a8a2-2a34a55fe743\") " Nov 22 00:18:13 crc kubenswrapper[4928]: I1122 00:18:13.258333 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5d282e4-90d5-40c4-a8a2-2a34a55fe743-util\") pod \"c5d282e4-90d5-40c4-a8a2-2a34a55fe743\" (UID: \"c5d282e4-90d5-40c4-a8a2-2a34a55fe743\") " Nov 22 00:18:13 crc kubenswrapper[4928]: I1122 00:18:13.258419 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c76x6\" (UniqueName: \"kubernetes.io/projected/c5d282e4-90d5-40c4-a8a2-2a34a55fe743-kube-api-access-c76x6\") pod \"c5d282e4-90d5-40c4-a8a2-2a34a55fe743\" (UID: \"c5d282e4-90d5-40c4-a8a2-2a34a55fe743\") " Nov 22 00:18:13 crc kubenswrapper[4928]: I1122 00:18:13.261097 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5d282e4-90d5-40c4-a8a2-2a34a55fe743-bundle" (OuterVolumeSpecName: "bundle") pod "c5d282e4-90d5-40c4-a8a2-2a34a55fe743" (UID: "c5d282e4-90d5-40c4-a8a2-2a34a55fe743"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:18:13 crc kubenswrapper[4928]: I1122 00:18:13.263677 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5d282e4-90d5-40c4-a8a2-2a34a55fe743-kube-api-access-c76x6" (OuterVolumeSpecName: "kube-api-access-c76x6") pod "c5d282e4-90d5-40c4-a8a2-2a34a55fe743" (UID: "c5d282e4-90d5-40c4-a8a2-2a34a55fe743"). InnerVolumeSpecName "kube-api-access-c76x6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:18:13 crc kubenswrapper[4928]: I1122 00:18:13.272146 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5d282e4-90d5-40c4-a8a2-2a34a55fe743-util" (OuterVolumeSpecName: "util") pod "c5d282e4-90d5-40c4-a8a2-2a34a55fe743" (UID: "c5d282e4-90d5-40c4-a8a2-2a34a55fe743"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:18:13 crc kubenswrapper[4928]: I1122 00:18:13.360089 4928 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5d282e4-90d5-40c4-a8a2-2a34a55fe743-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 00:18:13 crc kubenswrapper[4928]: I1122 00:18:13.360120 4928 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5d282e4-90d5-40c4-a8a2-2a34a55fe743-util\") on node \"crc\" DevicePath \"\"" Nov 22 00:18:13 crc kubenswrapper[4928]: I1122 00:18:13.360134 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c76x6\" (UniqueName: \"kubernetes.io/projected/c5d282e4-90d5-40c4-a8a2-2a34a55fe743-kube-api-access-c76x6\") on node \"crc\" DevicePath \"\"" Nov 22 00:18:13 crc kubenswrapper[4928]: I1122 00:18:13.931201 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv" event={"ID":"c5d282e4-90d5-40c4-a8a2-2a34a55fe743","Type":"ContainerDied","Data":"de0bbc65a139c2bbe042dddcaaaa79b4e32ab21bee2911caec9ca29764486f3d"} Nov 22 00:18:13 crc kubenswrapper[4928]: I1122 00:18:13.931246 4928 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de0bbc65a139c2bbe042dddcaaaa79b4e32ab21bee2911caec9ca29764486f3d" Nov 22 00:18:13 crc kubenswrapper[4928]: I1122 00:18:13.931248 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv" Nov 22 00:18:14 crc kubenswrapper[4928]: I1122 00:18:14.159064 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh"] Nov 22 00:18:14 crc kubenswrapper[4928]: E1122 00:18:14.159272 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5d282e4-90d5-40c4-a8a2-2a34a55fe743" containerName="extract" Nov 22 00:18:14 crc kubenswrapper[4928]: I1122 00:18:14.159284 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5d282e4-90d5-40c4-a8a2-2a34a55fe743" containerName="extract" Nov 22 00:18:14 crc kubenswrapper[4928]: E1122 00:18:14.159296 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5d282e4-90d5-40c4-a8a2-2a34a55fe743" containerName="util" Nov 22 00:18:14 crc kubenswrapper[4928]: I1122 00:18:14.159303 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5d282e4-90d5-40c4-a8a2-2a34a55fe743" containerName="util" Nov 22 00:18:14 crc kubenswrapper[4928]: E1122 00:18:14.159320 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5d282e4-90d5-40c4-a8a2-2a34a55fe743" containerName="pull" Nov 22 00:18:14 crc kubenswrapper[4928]: I1122 00:18:14.159326 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5d282e4-90d5-40c4-a8a2-2a34a55fe743" containerName="pull" Nov 22 00:18:14 crc kubenswrapper[4928]: I1122 00:18:14.159402 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5d282e4-90d5-40c4-a8a2-2a34a55fe743" containerName="extract" Nov 22 00:18:14 crc kubenswrapper[4928]: I1122 00:18:14.160231 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh" Nov 22 00:18:14 crc kubenswrapper[4928]: I1122 00:18:14.162193 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 22 00:18:14 crc kubenswrapper[4928]: I1122 00:18:14.168803 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh"] Nov 22 00:18:14 crc kubenswrapper[4928]: I1122 00:18:14.272318 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3c84ac41-e8e8-486b-ad01-5e453477cf90-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh\" (UID: \"3c84ac41-e8e8-486b-ad01-5e453477cf90\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh" Nov 22 00:18:14 crc kubenswrapper[4928]: I1122 00:18:14.272387 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3c84ac41-e8e8-486b-ad01-5e453477cf90-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh\" (UID: \"3c84ac41-e8e8-486b-ad01-5e453477cf90\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh" Nov 22 00:18:14 crc kubenswrapper[4928]: I1122 00:18:14.272494 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqpss\" (UniqueName: \"kubernetes.io/projected/3c84ac41-e8e8-486b-ad01-5e453477cf90-kube-api-access-bqpss\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh\" (UID: \"3c84ac41-e8e8-486b-ad01-5e453477cf90\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh" Nov 22 00:18:14 crc kubenswrapper[4928]: 
I1122 00:18:14.373329 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqpss\" (UniqueName: \"kubernetes.io/projected/3c84ac41-e8e8-486b-ad01-5e453477cf90-kube-api-access-bqpss\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh\" (UID: \"3c84ac41-e8e8-486b-ad01-5e453477cf90\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh" Nov 22 00:18:14 crc kubenswrapper[4928]: I1122 00:18:14.373438 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3c84ac41-e8e8-486b-ad01-5e453477cf90-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh\" (UID: \"3c84ac41-e8e8-486b-ad01-5e453477cf90\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh" Nov 22 00:18:14 crc kubenswrapper[4928]: I1122 00:18:14.373506 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3c84ac41-e8e8-486b-ad01-5e453477cf90-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh\" (UID: \"3c84ac41-e8e8-486b-ad01-5e453477cf90\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh" Nov 22 00:18:14 crc kubenswrapper[4928]: I1122 00:18:14.374007 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3c84ac41-e8e8-486b-ad01-5e453477cf90-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh\" (UID: \"3c84ac41-e8e8-486b-ad01-5e453477cf90\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh" Nov 22 00:18:14 crc kubenswrapper[4928]: I1122 00:18:14.374172 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/3c84ac41-e8e8-486b-ad01-5e453477cf90-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh\" (UID: \"3c84ac41-e8e8-486b-ad01-5e453477cf90\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh" Nov 22 00:18:14 crc kubenswrapper[4928]: I1122 00:18:14.389605 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqpss\" (UniqueName: \"kubernetes.io/projected/3c84ac41-e8e8-486b-ad01-5e453477cf90-kube-api-access-bqpss\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh\" (UID: \"3c84ac41-e8e8-486b-ad01-5e453477cf90\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh" Nov 22 00:18:14 crc kubenswrapper[4928]: I1122 00:18:14.481238 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh" Nov 22 00:18:14 crc kubenswrapper[4928]: I1122 00:18:14.661935 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh"] Nov 22 00:18:14 crc kubenswrapper[4928]: W1122 00:18:14.669053 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c84ac41_e8e8_486b_ad01_5e453477cf90.slice/crio-c0ef92bfb7e33630d4d8e52888fb08b0366f60cbd2ca865e8b569824d1e1ed86 WatchSource:0}: Error finding container c0ef92bfb7e33630d4d8e52888fb08b0366f60cbd2ca865e8b569824d1e1ed86: Status 404 returned error can't find the container with id c0ef92bfb7e33630d4d8e52888fb08b0366f60cbd2ca865e8b569824d1e1ed86 Nov 22 00:18:14 crc kubenswrapper[4928]: I1122 00:18:14.937714 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh" 
event={"ID":"3c84ac41-e8e8-486b-ad01-5e453477cf90","Type":"ContainerStarted","Data":"4d503ab2005b5354a2d48a855dbea4710bf79de57dd2826c5c791d1b779c02a0"} Nov 22 00:18:14 crc kubenswrapper[4928]: I1122 00:18:14.938085 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh" event={"ID":"3c84ac41-e8e8-486b-ad01-5e453477cf90","Type":"ContainerStarted","Data":"c0ef92bfb7e33630d4d8e52888fb08b0366f60cbd2ca865e8b569824d1e1ed86"} Nov 22 00:18:15 crc kubenswrapper[4928]: I1122 00:18:15.163967 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6"] Nov 22 00:18:15 crc kubenswrapper[4928]: I1122 00:18:15.166671 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6" Nov 22 00:18:15 crc kubenswrapper[4928]: I1122 00:18:15.171338 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6"] Nov 22 00:18:15 crc kubenswrapper[4928]: I1122 00:18:15.338901 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6\" (UID: \"3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6" Nov 22 00:18:15 crc kubenswrapper[4928]: I1122 00:18:15.338975 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6\" (UID: 
\"3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6" Nov 22 00:18:15 crc kubenswrapper[4928]: I1122 00:18:15.339005 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jchf\" (UniqueName: \"kubernetes.io/projected/3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0-kube-api-access-4jchf\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6\" (UID: \"3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6" Nov 22 00:18:15 crc kubenswrapper[4928]: I1122 00:18:15.440731 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6\" (UID: \"3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6" Nov 22 00:18:15 crc kubenswrapper[4928]: I1122 00:18:15.440849 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6\" (UID: \"3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6" Nov 22 00:18:15 crc kubenswrapper[4928]: I1122 00:18:15.440877 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jchf\" (UniqueName: \"kubernetes.io/projected/3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0-kube-api-access-4jchf\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6\" (UID: \"3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0\") " 
pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6" Nov 22 00:18:15 crc kubenswrapper[4928]: I1122 00:18:15.441630 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6\" (UID: \"3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6" Nov 22 00:18:15 crc kubenswrapper[4928]: I1122 00:18:15.441947 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6\" (UID: \"3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6" Nov 22 00:18:15 crc kubenswrapper[4928]: I1122 00:18:15.462178 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jchf\" (UniqueName: \"kubernetes.io/projected/3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0-kube-api-access-4jchf\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6\" (UID: \"3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6" Nov 22 00:18:15 crc kubenswrapper[4928]: I1122 00:18:15.558576 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6" Nov 22 00:18:15 crc kubenswrapper[4928]: I1122 00:18:15.930111 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6"] Nov 22 00:18:15 crc kubenswrapper[4928]: I1122 00:18:15.946093 4928 generic.go:334] "Generic (PLEG): container finished" podID="3c84ac41-e8e8-486b-ad01-5e453477cf90" containerID="4d503ab2005b5354a2d48a855dbea4710bf79de57dd2826c5c791d1b779c02a0" exitCode=0 Nov 22 00:18:15 crc kubenswrapper[4928]: I1122 00:18:15.946155 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh" event={"ID":"3c84ac41-e8e8-486b-ad01-5e453477cf90","Type":"ContainerDied","Data":"4d503ab2005b5354a2d48a855dbea4710bf79de57dd2826c5c791d1b779c02a0"} Nov 22 00:18:16 crc kubenswrapper[4928]: I1122 00:18:16.952091 4928 generic.go:334] "Generic (PLEG): container finished" podID="3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0" containerID="adb3fda5d5aa806c3fe28be7ed59820e99a6be7532e6272a188cd7bf33d30969" exitCode=0 Nov 22 00:18:16 crc kubenswrapper[4928]: I1122 00:18:16.952210 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6" event={"ID":"3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0","Type":"ContainerDied","Data":"adb3fda5d5aa806c3fe28be7ed59820e99a6be7532e6272a188cd7bf33d30969"} Nov 22 00:18:16 crc kubenswrapper[4928]: I1122 00:18:16.952419 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6" event={"ID":"3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0","Type":"ContainerStarted","Data":"cb4c5aba63b651fa4023a185d9224239dce75a8e5dcb9a87cdf3eff38098a0b1"} Nov 22 00:18:19 crc kubenswrapper[4928]: I1122 00:18:19.970505 4928 
generic.go:334] "Generic (PLEG): container finished" podID="3c84ac41-e8e8-486b-ad01-5e453477cf90" containerID="eeeff15375057cef898e2d9d151d6c03d7e1f9d6f222b2d7602af75c129b97a6" exitCode=0 Nov 22 00:18:19 crc kubenswrapper[4928]: I1122 00:18:19.970557 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh" event={"ID":"3c84ac41-e8e8-486b-ad01-5e453477cf90","Type":"ContainerDied","Data":"eeeff15375057cef898e2d9d151d6c03d7e1f9d6f222b2d7602af75c129b97a6"} Nov 22 00:18:20 crc kubenswrapper[4928]: I1122 00:18:20.977184 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6" event={"ID":"3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0","Type":"ContainerStarted","Data":"6c62084821752ccdaa01d1fb5d1e37fd76b59ebc2442d3e6f6d1d9ec11130489"} Nov 22 00:18:21 crc kubenswrapper[4928]: I1122 00:18:21.984662 4928 generic.go:334] "Generic (PLEG): container finished" podID="3c84ac41-e8e8-486b-ad01-5e453477cf90" containerID="9fe7fea42dc1eae33d56b188e227e89e59f3e9b21914aee9e4bb68a27bda9e6b" exitCode=0 Nov 22 00:18:21 crc kubenswrapper[4928]: I1122 00:18:21.984741 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh" event={"ID":"3c84ac41-e8e8-486b-ad01-5e453477cf90","Type":"ContainerDied","Data":"9fe7fea42dc1eae33d56b188e227e89e59f3e9b21914aee9e4bb68a27bda9e6b"} Nov 22 00:18:21 crc kubenswrapper[4928]: I1122 00:18:21.986328 4928 generic.go:334] "Generic (PLEG): container finished" podID="3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0" containerID="6c62084821752ccdaa01d1fb5d1e37fd76b59ebc2442d3e6f6d1d9ec11130489" exitCode=0 Nov 22 00:18:21 crc kubenswrapper[4928]: I1122 00:18:21.986358 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6" 
event={"ID":"3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0","Type":"ContainerDied","Data":"6c62084821752ccdaa01d1fb5d1e37fd76b59ebc2442d3e6f6d1d9ec11130489"} Nov 22 00:18:22 crc kubenswrapper[4928]: I1122 00:18:22.214820 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc"] Nov 22 00:18:22 crc kubenswrapper[4928]: I1122 00:18:22.216155 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc" Nov 22 00:18:22 crc kubenswrapper[4928]: I1122 00:18:22.234710 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc"] Nov 22 00:18:22 crc kubenswrapper[4928]: I1122 00:18:22.324542 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/136db455-41e9-435d-8628-2674862c3ec9-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc\" (UID: \"136db455-41e9-435d-8628-2674862c3ec9\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc" Nov 22 00:18:22 crc kubenswrapper[4928]: I1122 00:18:22.324646 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/136db455-41e9-435d-8628-2674862c3ec9-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc\" (UID: \"136db455-41e9-435d-8628-2674862c3ec9\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc" Nov 22 00:18:22 crc kubenswrapper[4928]: I1122 00:18:22.324676 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8smf\" (UniqueName: 
\"kubernetes.io/projected/136db455-41e9-435d-8628-2674862c3ec9-kube-api-access-h8smf\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc\" (UID: \"136db455-41e9-435d-8628-2674862c3ec9\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc" Nov 22 00:18:22 crc kubenswrapper[4928]: I1122 00:18:22.426002 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/136db455-41e9-435d-8628-2674862c3ec9-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc\" (UID: \"136db455-41e9-435d-8628-2674862c3ec9\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc" Nov 22 00:18:22 crc kubenswrapper[4928]: I1122 00:18:22.426104 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/136db455-41e9-435d-8628-2674862c3ec9-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc\" (UID: \"136db455-41e9-435d-8628-2674862c3ec9\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc" Nov 22 00:18:22 crc kubenswrapper[4928]: I1122 00:18:22.426132 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8smf\" (UniqueName: \"kubernetes.io/projected/136db455-41e9-435d-8628-2674862c3ec9-kube-api-access-h8smf\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc\" (UID: \"136db455-41e9-435d-8628-2674862c3ec9\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc" Nov 22 00:18:22 crc kubenswrapper[4928]: I1122 00:18:22.427000 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/136db455-41e9-435d-8628-2674862c3ec9-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc\" 
(UID: \"136db455-41e9-435d-8628-2674862c3ec9\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc" Nov 22 00:18:22 crc kubenswrapper[4928]: I1122 00:18:22.427272 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/136db455-41e9-435d-8628-2674862c3ec9-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc\" (UID: \"136db455-41e9-435d-8628-2674862c3ec9\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc" Nov 22 00:18:22 crc kubenswrapper[4928]: I1122 00:18:22.470336 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8smf\" (UniqueName: \"kubernetes.io/projected/136db455-41e9-435d-8628-2674862c3ec9-kube-api-access-h8smf\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc\" (UID: \"136db455-41e9-435d-8628-2674862c3ec9\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc" Nov 22 00:18:22 crc kubenswrapper[4928]: I1122 00:18:22.529216 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc" Nov 22 00:18:22 crc kubenswrapper[4928]: I1122 00:18:22.889959 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc"] Nov 22 00:18:22 crc kubenswrapper[4928]: I1122 00:18:22.994112 4928 generic.go:334] "Generic (PLEG): container finished" podID="3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0" containerID="8a230ae232c7869c531963dabbdf6156991c98d36b15254ff6b5c0312a9bdb3d" exitCode=0 Nov 22 00:18:22 crc kubenswrapper[4928]: I1122 00:18:22.994172 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6" event={"ID":"3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0","Type":"ContainerDied","Data":"8a230ae232c7869c531963dabbdf6156991c98d36b15254ff6b5c0312a9bdb3d"} Nov 22 00:18:22 crc kubenswrapper[4928]: I1122 00:18:22.995306 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc" event={"ID":"136db455-41e9-435d-8628-2674862c3ec9","Type":"ContainerStarted","Data":"dd79432c074a9ba6c156fe22ff833e26b8a6acf202bc050a8e41c8b62146c16e"} Nov 22 00:18:23 crc kubenswrapper[4928]: I1122 00:18:23.407664 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh" Nov 22 00:18:23 crc kubenswrapper[4928]: I1122 00:18:23.551304 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3c84ac41-e8e8-486b-ad01-5e453477cf90-util\") pod \"3c84ac41-e8e8-486b-ad01-5e453477cf90\" (UID: \"3c84ac41-e8e8-486b-ad01-5e453477cf90\") " Nov 22 00:18:23 crc kubenswrapper[4928]: I1122 00:18:23.551414 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqpss\" (UniqueName: \"kubernetes.io/projected/3c84ac41-e8e8-486b-ad01-5e453477cf90-kube-api-access-bqpss\") pod \"3c84ac41-e8e8-486b-ad01-5e453477cf90\" (UID: \"3c84ac41-e8e8-486b-ad01-5e453477cf90\") " Nov 22 00:18:23 crc kubenswrapper[4928]: I1122 00:18:23.551454 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3c84ac41-e8e8-486b-ad01-5e453477cf90-bundle\") pod \"3c84ac41-e8e8-486b-ad01-5e453477cf90\" (UID: \"3c84ac41-e8e8-486b-ad01-5e453477cf90\") " Nov 22 00:18:23 crc kubenswrapper[4928]: I1122 00:18:23.552620 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c84ac41-e8e8-486b-ad01-5e453477cf90-bundle" (OuterVolumeSpecName: "bundle") pod "3c84ac41-e8e8-486b-ad01-5e453477cf90" (UID: "3c84ac41-e8e8-486b-ad01-5e453477cf90"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:18:23 crc kubenswrapper[4928]: I1122 00:18:23.559166 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c84ac41-e8e8-486b-ad01-5e453477cf90-kube-api-access-bqpss" (OuterVolumeSpecName: "kube-api-access-bqpss") pod "3c84ac41-e8e8-486b-ad01-5e453477cf90" (UID: "3c84ac41-e8e8-486b-ad01-5e453477cf90"). InnerVolumeSpecName "kube-api-access-bqpss". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:18:23 crc kubenswrapper[4928]: I1122 00:18:23.565085 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c84ac41-e8e8-486b-ad01-5e453477cf90-util" (OuterVolumeSpecName: "util") pod "3c84ac41-e8e8-486b-ad01-5e453477cf90" (UID: "3c84ac41-e8e8-486b-ad01-5e453477cf90"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:18:23 crc kubenswrapper[4928]: I1122 00:18:23.652841 4928 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3c84ac41-e8e8-486b-ad01-5e453477cf90-util\") on node \"crc\" DevicePath \"\"" Nov 22 00:18:23 crc kubenswrapper[4928]: I1122 00:18:23.652882 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqpss\" (UniqueName: \"kubernetes.io/projected/3c84ac41-e8e8-486b-ad01-5e453477cf90-kube-api-access-bqpss\") on node \"crc\" DevicePath \"\"" Nov 22 00:18:23 crc kubenswrapper[4928]: I1122 00:18:23.652899 4928 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3c84ac41-e8e8-486b-ad01-5e453477cf90-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 00:18:24 crc kubenswrapper[4928]: I1122 00:18:24.001781 4928 generic.go:334] "Generic (PLEG): container finished" podID="136db455-41e9-435d-8628-2674862c3ec9" containerID="ff223a48ed94647f5a9a897eb03b3aedb36af3e80bd0f787ad4b9b91f777985d" exitCode=0 Nov 22 00:18:24 crc kubenswrapper[4928]: I1122 00:18:24.001901 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc" event={"ID":"136db455-41e9-435d-8628-2674862c3ec9","Type":"ContainerDied","Data":"ff223a48ed94647f5a9a897eb03b3aedb36af3e80bd0f787ad4b9b91f777985d"} Nov 22 00:18:24 crc kubenswrapper[4928]: I1122 00:18:24.004746 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh" Nov 22 00:18:24 crc kubenswrapper[4928]: I1122 00:18:24.005391 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh" event={"ID":"3c84ac41-e8e8-486b-ad01-5e453477cf90","Type":"ContainerDied","Data":"c0ef92bfb7e33630d4d8e52888fb08b0366f60cbd2ca865e8b569824d1e1ed86"} Nov 22 00:18:24 crc kubenswrapper[4928]: I1122 00:18:24.005417 4928 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0ef92bfb7e33630d4d8e52888fb08b0366f60cbd2ca865e8b569824d1e1ed86" Nov 22 00:18:24 crc kubenswrapper[4928]: I1122 00:18:24.245855 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6" Nov 22 00:18:24 crc kubenswrapper[4928]: I1122 00:18:24.360139 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0-util\") pod \"3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0\" (UID: \"3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0\") " Nov 22 00:18:24 crc kubenswrapper[4928]: I1122 00:18:24.360193 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0-bundle\") pod \"3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0\" (UID: \"3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0\") " Nov 22 00:18:24 crc kubenswrapper[4928]: I1122 00:18:24.360218 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jchf\" (UniqueName: \"kubernetes.io/projected/3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0-kube-api-access-4jchf\") pod \"3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0\" (UID: \"3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0\") " Nov 22 00:18:24 crc 
kubenswrapper[4928]: I1122 00:18:24.360702 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0-bundle" (OuterVolumeSpecName: "bundle") pod "3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0" (UID: "3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:18:24 crc kubenswrapper[4928]: I1122 00:18:24.365899 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0-kube-api-access-4jchf" (OuterVolumeSpecName: "kube-api-access-4jchf") pod "3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0" (UID: "3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0"). InnerVolumeSpecName "kube-api-access-4jchf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:18:24 crc kubenswrapper[4928]: I1122 00:18:24.376503 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0-util" (OuterVolumeSpecName: "util") pod "3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0" (UID: "3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:18:24 crc kubenswrapper[4928]: I1122 00:18:24.461069 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jchf\" (UniqueName: \"kubernetes.io/projected/3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0-kube-api-access-4jchf\") on node \"crc\" DevicePath \"\"" Nov 22 00:18:24 crc kubenswrapper[4928]: I1122 00:18:24.461113 4928 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0-util\") on node \"crc\" DevicePath \"\"" Nov 22 00:18:24 crc kubenswrapper[4928]: I1122 00:18:24.461124 4928 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.021669 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6" event={"ID":"3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0","Type":"ContainerDied","Data":"cb4c5aba63b651fa4023a185d9224239dce75a8e5dcb9a87cdf3eff38098a0b1"} Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.022184 4928 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb4c5aba63b651fa4023a185d9224239dce75a8e5dcb9a87cdf3eff38098a0b1" Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.021763 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6" Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.708472 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-bxsxq"] Nov 22 00:18:25 crc kubenswrapper[4928]: E1122 00:18:25.708744 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0" containerName="pull" Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.708773 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0" containerName="pull" Nov 22 00:18:25 crc kubenswrapper[4928]: E1122 00:18:25.708810 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0" containerName="util" Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.708821 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0" containerName="util" Nov 22 00:18:25 crc kubenswrapper[4928]: E1122 00:18:25.708840 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c84ac41-e8e8-486b-ad01-5e453477cf90" containerName="pull" Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.708850 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c84ac41-e8e8-486b-ad01-5e453477cf90" containerName="pull" Nov 22 00:18:25 crc kubenswrapper[4928]: E1122 00:18:25.708863 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0" containerName="extract" Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.708872 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0" containerName="extract" Nov 22 00:18:25 crc kubenswrapper[4928]: E1122 00:18:25.708886 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c84ac41-e8e8-486b-ad01-5e453477cf90" 
containerName="extract" Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.708895 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c84ac41-e8e8-486b-ad01-5e453477cf90" containerName="extract" Nov 22 00:18:25 crc kubenswrapper[4928]: E1122 00:18:25.708908 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c84ac41-e8e8-486b-ad01-5e453477cf90" containerName="util" Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.708916 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c84ac41-e8e8-486b-ad01-5e453477cf90" containerName="util" Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.709027 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0" containerName="extract" Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.709043 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c84ac41-e8e8-486b-ad01-5e453477cf90" containerName="extract" Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.709487 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-bxsxq" Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.713122 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-95cdh" Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.713476 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.713752 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.726608 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-bxsxq"] Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.819197 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66ddb79c8b-t6kn4"] Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.819896 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66ddb79c8b-t6kn4" Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.823207 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.823421 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-g4jhg" Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.835111 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66ddb79c8b-lxzbv"] Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.835960 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66ddb79c8b-lxzbv" Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.841137 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66ddb79c8b-t6kn4"] Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.875407 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlqdc\" (UniqueName: \"kubernetes.io/projected/f09d8639-7745-440c-a21b-5008c071fd0b-kube-api-access-tlqdc\") pod \"obo-prometheus-operator-668cf9dfbb-bxsxq\" (UID: \"f09d8639-7745-440c-a21b-5008c071fd0b\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-bxsxq" Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.906131 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66ddb79c8b-lxzbv"] Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.976998 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlqdc\" (UniqueName: \"kubernetes.io/projected/f09d8639-7745-440c-a21b-5008c071fd0b-kube-api-access-tlqdc\") pod \"obo-prometheus-operator-668cf9dfbb-bxsxq\" (UID: \"f09d8639-7745-440c-a21b-5008c071fd0b\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-bxsxq" Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.977054 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2dd2bccc-816e-41de-906e-3541bfebaa24-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66ddb79c8b-lxzbv\" (UID: \"2dd2bccc-816e-41de-906e-3541bfebaa24\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66ddb79c8b-lxzbv" Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.977079 4928 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2dd2bccc-816e-41de-906e-3541bfebaa24-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66ddb79c8b-lxzbv\" (UID: \"2dd2bccc-816e-41de-906e-3541bfebaa24\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66ddb79c8b-lxzbv" Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.977108 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/645d1f1b-9079-4a84-bf38-f1029dd01b07-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66ddb79c8b-t6kn4\" (UID: \"645d1f1b-9079-4a84-bf38-f1029dd01b07\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66ddb79c8b-t6kn4" Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.977126 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/645d1f1b-9079-4a84-bf38-f1029dd01b07-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66ddb79c8b-t6kn4\" (UID: \"645d1f1b-9079-4a84-bf38-f1029dd01b07\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66ddb79c8b-t6kn4" Nov 22 00:18:25 crc kubenswrapper[4928]: I1122 00:18:25.997834 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlqdc\" (UniqueName: \"kubernetes.io/projected/f09d8639-7745-440c-a21b-5008c071fd0b-kube-api-access-tlqdc\") pod \"obo-prometheus-operator-668cf9dfbb-bxsxq\" (UID: \"f09d8639-7745-440c-a21b-5008c071fd0b\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-bxsxq" Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.027125 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-bxsxq" Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.036585 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-ntd99"] Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.037440 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-ntd99" Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.039087 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-gxf4d" Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.039324 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.060441 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-ntd99"] Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.078248 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2dd2bccc-816e-41de-906e-3541bfebaa24-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66ddb79c8b-lxzbv\" (UID: \"2dd2bccc-816e-41de-906e-3541bfebaa24\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66ddb79c8b-lxzbv" Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.078302 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2dd2bccc-816e-41de-906e-3541bfebaa24-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66ddb79c8b-lxzbv\" (UID: \"2dd2bccc-816e-41de-906e-3541bfebaa24\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66ddb79c8b-lxzbv" Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 
00:18:26.078340 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/645d1f1b-9079-4a84-bf38-f1029dd01b07-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66ddb79c8b-t6kn4\" (UID: \"645d1f1b-9079-4a84-bf38-f1029dd01b07\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66ddb79c8b-t6kn4" Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.078364 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/645d1f1b-9079-4a84-bf38-f1029dd01b07-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66ddb79c8b-t6kn4\" (UID: \"645d1f1b-9079-4a84-bf38-f1029dd01b07\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66ddb79c8b-t6kn4" Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.083347 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2dd2bccc-816e-41de-906e-3541bfebaa24-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66ddb79c8b-lxzbv\" (UID: \"2dd2bccc-816e-41de-906e-3541bfebaa24\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66ddb79c8b-lxzbv" Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.090451 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2dd2bccc-816e-41de-906e-3541bfebaa24-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66ddb79c8b-lxzbv\" (UID: \"2dd2bccc-816e-41de-906e-3541bfebaa24\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66ddb79c8b-lxzbv" Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.095393 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/645d1f1b-9079-4a84-bf38-f1029dd01b07-webhook-cert\") 
pod \"obo-prometheus-operator-admission-webhook-66ddb79c8b-t6kn4\" (UID: \"645d1f1b-9079-4a84-bf38-f1029dd01b07\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66ddb79c8b-t6kn4" Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.095521 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/645d1f1b-9079-4a84-bf38-f1029dd01b07-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66ddb79c8b-t6kn4\" (UID: \"645d1f1b-9079-4a84-bf38-f1029dd01b07\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66ddb79c8b-t6kn4" Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.136137 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66ddb79c8b-t6kn4" Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.158201 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66ddb79c8b-lxzbv" Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.182498 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c771d376-6f26-4072-936f-e8a81dfa2596-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-ntd99\" (UID: \"c771d376-6f26-4072-936f-e8a81dfa2596\") " pod="openshift-operators/observability-operator-d8bb48f5d-ntd99" Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.182590 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl5qr\" (UniqueName: \"kubernetes.io/projected/c771d376-6f26-4072-936f-e8a81dfa2596-kube-api-access-zl5qr\") pod \"observability-operator-d8bb48f5d-ntd99\" (UID: \"c771d376-6f26-4072-936f-e8a81dfa2596\") " 
pod="openshift-operators/observability-operator-d8bb48f5d-ntd99" Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.190590 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-w6578"] Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.191260 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-w6578" Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.195173 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-979jk" Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.244734 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-w6578"] Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.290419 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t8xs\" (UniqueName: \"kubernetes.io/projected/6ed2e567-35f6-4eba-992b-0e94c74db92e-kube-api-access-8t8xs\") pod \"perses-operator-5446b9c989-w6578\" (UID: \"6ed2e567-35f6-4eba-992b-0e94c74db92e\") " pod="openshift-operators/perses-operator-5446b9c989-w6578" Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.290474 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/6ed2e567-35f6-4eba-992b-0e94c74db92e-openshift-service-ca\") pod \"perses-operator-5446b9c989-w6578\" (UID: \"6ed2e567-35f6-4eba-992b-0e94c74db92e\") " pod="openshift-operators/perses-operator-5446b9c989-w6578" Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.290513 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c771d376-6f26-4072-936f-e8a81dfa2596-observability-operator-tls\") pod 
\"observability-operator-d8bb48f5d-ntd99\" (UID: \"c771d376-6f26-4072-936f-e8a81dfa2596\") " pod="openshift-operators/observability-operator-d8bb48f5d-ntd99" Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.290561 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl5qr\" (UniqueName: \"kubernetes.io/projected/c771d376-6f26-4072-936f-e8a81dfa2596-kube-api-access-zl5qr\") pod \"observability-operator-d8bb48f5d-ntd99\" (UID: \"c771d376-6f26-4072-936f-e8a81dfa2596\") " pod="openshift-operators/observability-operator-d8bb48f5d-ntd99" Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.294408 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c771d376-6f26-4072-936f-e8a81dfa2596-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-ntd99\" (UID: \"c771d376-6f26-4072-936f-e8a81dfa2596\") " pod="openshift-operators/observability-operator-d8bb48f5d-ntd99" Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.321477 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl5qr\" (UniqueName: \"kubernetes.io/projected/c771d376-6f26-4072-936f-e8a81dfa2596-kube-api-access-zl5qr\") pod \"observability-operator-d8bb48f5d-ntd99\" (UID: \"c771d376-6f26-4072-936f-e8a81dfa2596\") " pod="openshift-operators/observability-operator-d8bb48f5d-ntd99" Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.391910 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t8xs\" (UniqueName: \"kubernetes.io/projected/6ed2e567-35f6-4eba-992b-0e94c74db92e-kube-api-access-8t8xs\") pod \"perses-operator-5446b9c989-w6578\" (UID: \"6ed2e567-35f6-4eba-992b-0e94c74db92e\") " pod="openshift-operators/perses-operator-5446b9c989-w6578" Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.392231 4928 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/6ed2e567-35f6-4eba-992b-0e94c74db92e-openshift-service-ca\") pod \"perses-operator-5446b9c989-w6578\" (UID: \"6ed2e567-35f6-4eba-992b-0e94c74db92e\") " pod="openshift-operators/perses-operator-5446b9c989-w6578" Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.393106 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/6ed2e567-35f6-4eba-992b-0e94c74db92e-openshift-service-ca\") pod \"perses-operator-5446b9c989-w6578\" (UID: \"6ed2e567-35f6-4eba-992b-0e94c74db92e\") " pod="openshift-operators/perses-operator-5446b9c989-w6578" Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.414628 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t8xs\" (UniqueName: \"kubernetes.io/projected/6ed2e567-35f6-4eba-992b-0e94c74db92e-kube-api-access-8t8xs\") pod \"perses-operator-5446b9c989-w6578\" (UID: \"6ed2e567-35f6-4eba-992b-0e94c74db92e\") " pod="openshift-operators/perses-operator-5446b9c989-w6578" Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.463342 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-ntd99" Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.472457 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-bxsxq"] Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.515692 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-w6578" Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.571934 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66ddb79c8b-t6kn4"] Nov 22 00:18:26 crc kubenswrapper[4928]: W1122 00:18:26.591092 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod645d1f1b_9079_4a84_bf38_f1029dd01b07.slice/crio-ea55c8a7fc2c93cb5561d64f276c693cac144d431b23bd7d3872b02660d8a015 WatchSource:0}: Error finding container ea55c8a7fc2c93cb5561d64f276c693cac144d431b23bd7d3872b02660d8a015: Status 404 returned error can't find the container with id ea55c8a7fc2c93cb5561d64f276c693cac144d431b23bd7d3872b02660d8a015 Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.613558 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66ddb79c8b-lxzbv"] Nov 22 00:18:26 crc kubenswrapper[4928]: W1122 00:18:26.622387 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2dd2bccc_816e_41de_906e_3541bfebaa24.slice/crio-6ad3edfca8fd57d1b28a42d35a1cdad8db7e4004c23f17c01bfeea05d9bde3f3 WatchSource:0}: Error finding container 6ad3edfca8fd57d1b28a42d35a1cdad8db7e4004c23f17c01bfeea05d9bde3f3: Status 404 returned error can't find the container with id 6ad3edfca8fd57d1b28a42d35a1cdad8db7e4004c23f17c01bfeea05d9bde3f3 Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.844074 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-w6578"] Nov 22 00:18:26 crc kubenswrapper[4928]: W1122 00:18:26.853570 4928 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ed2e567_35f6_4eba_992b_0e94c74db92e.slice/crio-94753e176b4b83f893824d0a81bd833f73b2733da72bed822a025354bcd9e8ee WatchSource:0}: Error finding container 94753e176b4b83f893824d0a81bd833f73b2733da72bed822a025354bcd9e8ee: Status 404 returned error can't find the container with id 94753e176b4b83f893824d0a81bd833f73b2733da72bed822a025354bcd9e8ee Nov 22 00:18:26 crc kubenswrapper[4928]: I1122 00:18:26.949421 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-ntd99"] Nov 22 00:18:26 crc kubenswrapper[4928]: W1122 00:18:26.959700 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc771d376_6f26_4072_936f_e8a81dfa2596.slice/crio-267de19f5e85f673e04e3f25335c36c1d599bd2ead666ebb44446a281925a370 WatchSource:0}: Error finding container 267de19f5e85f673e04e3f25335c36c1d599bd2ead666ebb44446a281925a370: Status 404 returned error can't find the container with id 267de19f5e85f673e04e3f25335c36c1d599bd2ead666ebb44446a281925a370 Nov 22 00:18:27 crc kubenswrapper[4928]: I1122 00:18:27.032554 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-ntd99" event={"ID":"c771d376-6f26-4072-936f-e8a81dfa2596","Type":"ContainerStarted","Data":"267de19f5e85f673e04e3f25335c36c1d599bd2ead666ebb44446a281925a370"} Nov 22 00:18:27 crc kubenswrapper[4928]: I1122 00:18:27.033815 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66ddb79c8b-t6kn4" event={"ID":"645d1f1b-9079-4a84-bf38-f1029dd01b07","Type":"ContainerStarted","Data":"ea55c8a7fc2c93cb5561d64f276c693cac144d431b23bd7d3872b02660d8a015"} Nov 22 00:18:27 crc kubenswrapper[4928]: I1122 00:18:27.034973 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-bxsxq" event={"ID":"f09d8639-7745-440c-a21b-5008c071fd0b","Type":"ContainerStarted","Data":"b38ba2751ffcc8a2b31e0d5a4f9716939d8c24156b31e520a744feab5abc5838"} Nov 22 00:18:27 crc kubenswrapper[4928]: I1122 00:18:27.036009 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66ddb79c8b-lxzbv" event={"ID":"2dd2bccc-816e-41de-906e-3541bfebaa24","Type":"ContainerStarted","Data":"6ad3edfca8fd57d1b28a42d35a1cdad8db7e4004c23f17c01bfeea05d9bde3f3"} Nov 22 00:18:27 crc kubenswrapper[4928]: I1122 00:18:27.036917 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-w6578" event={"ID":"6ed2e567-35f6-4eba-992b-0e94c74db92e","Type":"ContainerStarted","Data":"94753e176b4b83f893824d0a81bd833f73b2733da72bed822a025354bcd9e8ee"} Nov 22 00:18:27 crc kubenswrapper[4928]: I1122 00:18:27.772387 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hnm59"] Nov 22 00:18:27 crc kubenswrapper[4928]: I1122 00:18:27.772581 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-hnm59" podUID="4f50294b-9edd-469f-8873-aeb445d7653b" containerName="controller-manager" containerID="cri-o://f569794b9d52eb63d8a588a84a5de3244e70f72f322d1a290059c03efa7a9b7f" gracePeriod=30 Nov 22 00:18:27 crc kubenswrapper[4928]: I1122 00:18:27.879307 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-87wq5"] Nov 22 00:18:27 crc kubenswrapper[4928]: I1122 00:18:27.879929 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-87wq5" podUID="5b36deea-9635-49f8-862a-b9067233fd24" containerName="route-controller-manager" 
containerID="cri-o://9ae7cee71b5c7caec818b68e051ae0c426146818767717f4470ee4633e571d6f" gracePeriod=30 Nov 22 00:18:28 crc kubenswrapper[4928]: I1122 00:18:28.074193 4928 generic.go:334] "Generic (PLEG): container finished" podID="4f50294b-9edd-469f-8873-aeb445d7653b" containerID="f569794b9d52eb63d8a588a84a5de3244e70f72f322d1a290059c03efa7a9b7f" exitCode=0 Nov 22 00:18:28 crc kubenswrapper[4928]: I1122 00:18:28.074289 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hnm59" event={"ID":"4f50294b-9edd-469f-8873-aeb445d7653b","Type":"ContainerDied","Data":"f569794b9d52eb63d8a588a84a5de3244e70f72f322d1a290059c03efa7a9b7f"} Nov 22 00:18:28 crc kubenswrapper[4928]: I1122 00:18:28.086914 4928 generic.go:334] "Generic (PLEG): container finished" podID="5b36deea-9635-49f8-862a-b9067233fd24" containerID="9ae7cee71b5c7caec818b68e051ae0c426146818767717f4470ee4633e571d6f" exitCode=0 Nov 22 00:18:28 crc kubenswrapper[4928]: I1122 00:18:28.086964 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-87wq5" event={"ID":"5b36deea-9635-49f8-862a-b9067233fd24","Type":"ContainerDied","Data":"9ae7cee71b5c7caec818b68e051ae0c426146818767717f4470ee4633e571d6f"} Nov 22 00:18:30 crc kubenswrapper[4928]: I1122 00:18:30.242576 4928 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-87wq5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Nov 22 00:18:30 crc kubenswrapper[4928]: I1122 00:18:30.242981 4928 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-87wq5" podUID="5b36deea-9635-49f8-862a-b9067233fd24" containerName="route-controller-manager" probeResult="failure" 
output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.045506 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-87wq5" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.060211 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hnm59" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.085053 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d45976c65-q2g45"] Nov 22 00:18:31 crc kubenswrapper[4928]: E1122 00:18:31.085390 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b36deea-9635-49f8-862a-b9067233fd24" containerName="route-controller-manager" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.085432 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b36deea-9635-49f8-862a-b9067233fd24" containerName="route-controller-manager" Nov 22 00:18:31 crc kubenswrapper[4928]: E1122 00:18:31.085449 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f50294b-9edd-469f-8873-aeb445d7653b" containerName="controller-manager" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.085456 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f50294b-9edd-469f-8873-aeb445d7653b" containerName="controller-manager" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.085595 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f50294b-9edd-469f-8873-aeb445d7653b" containerName="controller-manager" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.085613 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b36deea-9635-49f8-862a-b9067233fd24" containerName="route-controller-manager" Nov 22 00:18:31 
crc kubenswrapper[4928]: I1122 00:18:31.086136 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d45976c65-q2g45" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.128419 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d45976c65-q2g45"] Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.134168 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hnm59" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.134189 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hnm59" event={"ID":"4f50294b-9edd-469f-8873-aeb445d7653b","Type":"ContainerDied","Data":"02bb5cc8440809c7db6718231606610841a956a939aebbd4ec82278caf54d09f"} Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.134287 4928 scope.go:117] "RemoveContainer" containerID="f569794b9d52eb63d8a588a84a5de3244e70f72f322d1a290059c03efa7a9b7f" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.139718 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-87wq5" event={"ID":"5b36deea-9635-49f8-862a-b9067233fd24","Type":"ContainerDied","Data":"c0a76ff037d08d37ba7a7d5d9ba17be862f228c2f6a448a04a6fa73b0faa1417"} Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.140151 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-87wq5" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.151674 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc" event={"ID":"136db455-41e9-435d-8628-2674862c3ec9","Type":"ContainerStarted","Data":"2e14e8ee19ccfab82de3fcaeb9e019bacbf51291186b0c3ffc28d77b5bd56069"} Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.160940 4928 scope.go:117] "RemoveContainer" containerID="9ae7cee71b5c7caec818b68e051ae0c426146818767717f4470ee4633e571d6f" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.164075 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b36deea-9635-49f8-862a-b9067233fd24-config\") pod \"5b36deea-9635-49f8-862a-b9067233fd24\" (UID: \"5b36deea-9635-49f8-862a-b9067233fd24\") " Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.164121 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f50294b-9edd-469f-8873-aeb445d7653b-config\") pod \"4f50294b-9edd-469f-8873-aeb445d7653b\" (UID: \"4f50294b-9edd-469f-8873-aeb445d7653b\") " Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.164173 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f50294b-9edd-469f-8873-aeb445d7653b-client-ca\") pod \"4f50294b-9edd-469f-8873-aeb445d7653b\" (UID: \"4f50294b-9edd-469f-8873-aeb445d7653b\") " Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.164208 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f50294b-9edd-469f-8873-aeb445d7653b-serving-cert\") pod \"4f50294b-9edd-469f-8873-aeb445d7653b\" (UID: 
\"4f50294b-9edd-469f-8873-aeb445d7653b\") " Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.164256 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b36deea-9635-49f8-862a-b9067233fd24-serving-cert\") pod \"5b36deea-9635-49f8-862a-b9067233fd24\" (UID: \"5b36deea-9635-49f8-862a-b9067233fd24\") " Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.164323 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hp2wf\" (UniqueName: \"kubernetes.io/projected/4f50294b-9edd-469f-8873-aeb445d7653b-kube-api-access-hp2wf\") pod \"4f50294b-9edd-469f-8873-aeb445d7653b\" (UID: \"4f50294b-9edd-469f-8873-aeb445d7653b\") " Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.164372 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f50294b-9edd-469f-8873-aeb445d7653b-proxy-ca-bundles\") pod \"4f50294b-9edd-469f-8873-aeb445d7653b\" (UID: \"4f50294b-9edd-469f-8873-aeb445d7653b\") " Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.164390 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f67b8\" (UniqueName: \"kubernetes.io/projected/5b36deea-9635-49f8-862a-b9067233fd24-kube-api-access-f67b8\") pod \"5b36deea-9635-49f8-862a-b9067233fd24\" (UID: \"5b36deea-9635-49f8-862a-b9067233fd24\") " Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.164410 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b36deea-9635-49f8-862a-b9067233fd24-client-ca\") pod \"5b36deea-9635-49f8-862a-b9067233fd24\" (UID: \"5b36deea-9635-49f8-862a-b9067233fd24\") " Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.164575 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/020c781c-4012-44e8-84fd-dcdf581bb7e8-config\") pod \"route-controller-manager-d45976c65-q2g45\" (UID: \"020c781c-4012-44e8-84fd-dcdf581bb7e8\") " pod="openshift-route-controller-manager/route-controller-manager-d45976c65-q2g45" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.164593 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/020c781c-4012-44e8-84fd-dcdf581bb7e8-serving-cert\") pod \"route-controller-manager-d45976c65-q2g45\" (UID: \"020c781c-4012-44e8-84fd-dcdf581bb7e8\") " pod="openshift-route-controller-manager/route-controller-manager-d45976c65-q2g45" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.164621 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-275v9\" (UniqueName: \"kubernetes.io/projected/020c781c-4012-44e8-84fd-dcdf581bb7e8-kube-api-access-275v9\") pod \"route-controller-manager-d45976c65-q2g45\" (UID: \"020c781c-4012-44e8-84fd-dcdf581bb7e8\") " pod="openshift-route-controller-manager/route-controller-manager-d45976c65-q2g45" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.164652 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/020c781c-4012-44e8-84fd-dcdf581bb7e8-client-ca\") pod \"route-controller-manager-d45976c65-q2g45\" (UID: \"020c781c-4012-44e8-84fd-dcdf581bb7e8\") " pod="openshift-route-controller-manager/route-controller-manager-d45976c65-q2g45" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.165292 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b36deea-9635-49f8-862a-b9067233fd24-config" (OuterVolumeSpecName: "config") pod "5b36deea-9635-49f8-862a-b9067233fd24" (UID: "5b36deea-9635-49f8-862a-b9067233fd24"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.165732 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f50294b-9edd-469f-8873-aeb445d7653b-client-ca" (OuterVolumeSpecName: "client-ca") pod "4f50294b-9edd-469f-8873-aeb445d7653b" (UID: "4f50294b-9edd-469f-8873-aeb445d7653b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.165754 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b36deea-9635-49f8-862a-b9067233fd24-client-ca" (OuterVolumeSpecName: "client-ca") pod "5b36deea-9635-49f8-862a-b9067233fd24" (UID: "5b36deea-9635-49f8-862a-b9067233fd24"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.165930 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f50294b-9edd-469f-8873-aeb445d7653b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4f50294b-9edd-469f-8873-aeb445d7653b" (UID: "4f50294b-9edd-469f-8873-aeb445d7653b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.165974 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f50294b-9edd-469f-8873-aeb445d7653b-config" (OuterVolumeSpecName: "config") pod "4f50294b-9edd-469f-8873-aeb445d7653b" (UID: "4f50294b-9edd-469f-8873-aeb445d7653b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.173351 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f50294b-9edd-469f-8873-aeb445d7653b-kube-api-access-hp2wf" (OuterVolumeSpecName: "kube-api-access-hp2wf") pod "4f50294b-9edd-469f-8873-aeb445d7653b" (UID: "4f50294b-9edd-469f-8873-aeb445d7653b"). InnerVolumeSpecName "kube-api-access-hp2wf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.175299 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b36deea-9635-49f8-862a-b9067233fd24-kube-api-access-f67b8" (OuterVolumeSpecName: "kube-api-access-f67b8") pod "5b36deea-9635-49f8-862a-b9067233fd24" (UID: "5b36deea-9635-49f8-862a-b9067233fd24"). InnerVolumeSpecName "kube-api-access-f67b8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.175713 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b36deea-9635-49f8-862a-b9067233fd24-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5b36deea-9635-49f8-862a-b9067233fd24" (UID: "5b36deea-9635-49f8-862a-b9067233fd24"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.177922 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f50294b-9edd-469f-8873-aeb445d7653b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4f50294b-9edd-469f-8873-aeb445d7653b" (UID: "4f50294b-9edd-469f-8873-aeb445d7653b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.265944 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/020c781c-4012-44e8-84fd-dcdf581bb7e8-config\") pod \"route-controller-manager-d45976c65-q2g45\" (UID: \"020c781c-4012-44e8-84fd-dcdf581bb7e8\") " pod="openshift-route-controller-manager/route-controller-manager-d45976c65-q2g45" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.265998 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/020c781c-4012-44e8-84fd-dcdf581bb7e8-serving-cert\") pod \"route-controller-manager-d45976c65-q2g45\" (UID: \"020c781c-4012-44e8-84fd-dcdf581bb7e8\") " pod="openshift-route-controller-manager/route-controller-manager-d45976c65-q2g45" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.266041 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-275v9\" (UniqueName: \"kubernetes.io/projected/020c781c-4012-44e8-84fd-dcdf581bb7e8-kube-api-access-275v9\") pod \"route-controller-manager-d45976c65-q2g45\" (UID: \"020c781c-4012-44e8-84fd-dcdf581bb7e8\") " pod="openshift-route-controller-manager/route-controller-manager-d45976c65-q2g45" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.266070 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/020c781c-4012-44e8-84fd-dcdf581bb7e8-client-ca\") pod \"route-controller-manager-d45976c65-q2g45\" (UID: \"020c781c-4012-44e8-84fd-dcdf581bb7e8\") " pod="openshift-route-controller-manager/route-controller-manager-d45976c65-q2g45" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.266164 4928 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/4f50294b-9edd-469f-8873-aeb445d7653b-client-ca\") on node \"crc\" DevicePath \"\"" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.266177 4928 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f50294b-9edd-469f-8873-aeb445d7653b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.266190 4928 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b36deea-9635-49f8-862a-b9067233fd24-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.266201 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hp2wf\" (UniqueName: \"kubernetes.io/projected/4f50294b-9edd-469f-8873-aeb445d7653b-kube-api-access-hp2wf\") on node \"crc\" DevicePath \"\"" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.266213 4928 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f50294b-9edd-469f-8873-aeb445d7653b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.266225 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f67b8\" (UniqueName: \"kubernetes.io/projected/5b36deea-9635-49f8-862a-b9067233fd24-kube-api-access-f67b8\") on node \"crc\" DevicePath \"\"" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.266236 4928 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b36deea-9635-49f8-862a-b9067233fd24-client-ca\") on node \"crc\" DevicePath \"\"" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.266246 4928 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b36deea-9635-49f8-862a-b9067233fd24-config\") on node \"crc\" DevicePath \"\"" Nov 22 00:18:31 crc 
kubenswrapper[4928]: I1122 00:18:31.266255 4928 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f50294b-9edd-469f-8873-aeb445d7653b-config\") on node \"crc\" DevicePath \"\"" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.267642 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/020c781c-4012-44e8-84fd-dcdf581bb7e8-config\") pod \"route-controller-manager-d45976c65-q2g45\" (UID: \"020c781c-4012-44e8-84fd-dcdf581bb7e8\") " pod="openshift-route-controller-manager/route-controller-manager-d45976c65-q2g45" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.268256 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/020c781c-4012-44e8-84fd-dcdf581bb7e8-client-ca\") pod \"route-controller-manager-d45976c65-q2g45\" (UID: \"020c781c-4012-44e8-84fd-dcdf581bb7e8\") " pod="openshift-route-controller-manager/route-controller-manager-d45976c65-q2g45" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.270903 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/020c781c-4012-44e8-84fd-dcdf581bb7e8-serving-cert\") pod \"route-controller-manager-d45976c65-q2g45\" (UID: \"020c781c-4012-44e8-84fd-dcdf581bb7e8\") " pod="openshift-route-controller-manager/route-controller-manager-d45976c65-q2g45" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.285188 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-275v9\" (UniqueName: \"kubernetes.io/projected/020c781c-4012-44e8-84fd-dcdf581bb7e8-kube-api-access-275v9\") pod \"route-controller-manager-d45976c65-q2g45\" (UID: \"020c781c-4012-44e8-84fd-dcdf581bb7e8\") " pod="openshift-route-controller-manager/route-controller-manager-d45976c65-q2g45" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.426552 
4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d45976c65-q2g45" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.479858 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hnm59"] Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.483101 4928 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hnm59"] Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.489322 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-87wq5"] Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.493344 4928 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-87wq5"] Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.627712 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f50294b-9edd-469f-8873-aeb445d7653b" path="/var/lib/kubelet/pods/4f50294b-9edd-469f-8873-aeb445d7653b/volumes" Nov 22 00:18:31 crc kubenswrapper[4928]: I1122 00:18:31.629233 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b36deea-9635-49f8-862a-b9067233fd24" path="/var/lib/kubelet/pods/5b36deea-9635-49f8-862a-b9067233fd24/volumes" Nov 22 00:18:32 crc kubenswrapper[4928]: I1122 00:18:32.160669 4928 generic.go:334] "Generic (PLEG): container finished" podID="136db455-41e9-435d-8628-2674862c3ec9" containerID="2e14e8ee19ccfab82de3fcaeb9e019bacbf51291186b0c3ffc28d77b5bd56069" exitCode=0 Nov 22 00:18:32 crc kubenswrapper[4928]: I1122 00:18:32.160717 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc" 
event={"ID":"136db455-41e9-435d-8628-2674862c3ec9","Type":"ContainerDied","Data":"2e14e8ee19ccfab82de3fcaeb9e019bacbf51291186b0c3ffc28d77b5bd56069"} Nov 22 00:18:32 crc kubenswrapper[4928]: I1122 00:18:32.582553 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d45976c65-q2g45"] Nov 22 00:18:33 crc kubenswrapper[4928]: I1122 00:18:33.178051 4928 generic.go:334] "Generic (PLEG): container finished" podID="136db455-41e9-435d-8628-2674862c3ec9" containerID="d78bda391d9ba8c5307152cb8013baf55fb261d03cfa1a5f9e4b7a645679a6ae" exitCode=0 Nov 22 00:18:33 crc kubenswrapper[4928]: I1122 00:18:33.178433 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc" event={"ID":"136db455-41e9-435d-8628-2674862c3ec9","Type":"ContainerDied","Data":"d78bda391d9ba8c5307152cb8013baf55fb261d03cfa1a5f9e4b7a645679a6ae"} Nov 22 00:18:33 crc kubenswrapper[4928]: I1122 00:18:33.501919 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-vcqqh"] Nov 22 00:18:33 crc kubenswrapper[4928]: I1122 00:18:33.502557 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-vcqqh" Nov 22 00:18:33 crc kubenswrapper[4928]: I1122 00:18:33.505568 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"openshift-service-ca.crt" Nov 22 00:18:33 crc kubenswrapper[4928]: I1122 00:18:33.506296 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"kube-root-ca.crt" Nov 22 00:18:33 crc kubenswrapper[4928]: I1122 00:18:33.520680 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-vcqqh"] Nov 22 00:18:33 crc kubenswrapper[4928]: I1122 00:18:33.601612 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbtnx\" (UniqueName: \"kubernetes.io/projected/a04bec7c-e880-4b00-a112-23c3809fa0f6-kube-api-access-tbtnx\") pod \"interconnect-operator-5bb49f789d-vcqqh\" (UID: \"a04bec7c-e880-4b00-a112-23c3809fa0f6\") " pod="service-telemetry/interconnect-operator-5bb49f789d-vcqqh" Nov 22 00:18:33 crc kubenswrapper[4928]: I1122 00:18:33.703005 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbtnx\" (UniqueName: \"kubernetes.io/projected/a04bec7c-e880-4b00-a112-23c3809fa0f6-kube-api-access-tbtnx\") pod \"interconnect-operator-5bb49f789d-vcqqh\" (UID: \"a04bec7c-e880-4b00-a112-23c3809fa0f6\") " pod="service-telemetry/interconnect-operator-5bb49f789d-vcqqh" Nov 22 00:18:33 crc kubenswrapper[4928]: I1122 00:18:33.746397 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbtnx\" (UniqueName: \"kubernetes.io/projected/a04bec7c-e880-4b00-a112-23c3809fa0f6-kube-api-access-tbtnx\") pod \"interconnect-operator-5bb49f789d-vcqqh\" (UID: \"a04bec7c-e880-4b00-a112-23c3809fa0f6\") " pod="service-telemetry/interconnect-operator-5bb49f789d-vcqqh" Nov 22 00:18:33 crc kubenswrapper[4928]: I1122 00:18:33.819073 4928 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-vcqqh" Nov 22 00:18:33 crc kubenswrapper[4928]: I1122 00:18:33.921567 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-666888dcd8-6lp2q"] Nov 22 00:18:33 crc kubenswrapper[4928]: I1122 00:18:33.928043 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-666888dcd8-6lp2q" Nov 22 00:18:33 crc kubenswrapper[4928]: I1122 00:18:33.932108 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 22 00:18:33 crc kubenswrapper[4928]: I1122 00:18:33.932744 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 22 00:18:33 crc kubenswrapper[4928]: I1122 00:18:33.932968 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 22 00:18:33 crc kubenswrapper[4928]: I1122 00:18:33.933097 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 22 00:18:33 crc kubenswrapper[4928]: I1122 00:18:33.933201 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 22 00:18:33 crc kubenswrapper[4928]: I1122 00:18:33.934946 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 22 00:18:33 crc kubenswrapper[4928]: I1122 00:18:33.938074 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-666888dcd8-6lp2q"] Nov 22 00:18:33 crc kubenswrapper[4928]: I1122 00:18:33.940498 4928 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-global-ca" Nov 22 00:18:34 crc kubenswrapper[4928]: I1122 00:18:34.008648 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3630302f-db1c-4048-a470-20a9d2246217-config\") pod \"controller-manager-666888dcd8-6lp2q\" (UID: \"3630302f-db1c-4048-a470-20a9d2246217\") " pod="openshift-controller-manager/controller-manager-666888dcd8-6lp2q" Nov 22 00:18:34 crc kubenswrapper[4928]: I1122 00:18:34.008729 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3630302f-db1c-4048-a470-20a9d2246217-client-ca\") pod \"controller-manager-666888dcd8-6lp2q\" (UID: \"3630302f-db1c-4048-a470-20a9d2246217\") " pod="openshift-controller-manager/controller-manager-666888dcd8-6lp2q" Nov 22 00:18:34 crc kubenswrapper[4928]: I1122 00:18:34.008786 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd6wv\" (UniqueName: \"kubernetes.io/projected/3630302f-db1c-4048-a470-20a9d2246217-kube-api-access-vd6wv\") pod \"controller-manager-666888dcd8-6lp2q\" (UID: \"3630302f-db1c-4048-a470-20a9d2246217\") " pod="openshift-controller-manager/controller-manager-666888dcd8-6lp2q" Nov 22 00:18:34 crc kubenswrapper[4928]: I1122 00:18:34.008900 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3630302f-db1c-4048-a470-20a9d2246217-serving-cert\") pod \"controller-manager-666888dcd8-6lp2q\" (UID: \"3630302f-db1c-4048-a470-20a9d2246217\") " pod="openshift-controller-manager/controller-manager-666888dcd8-6lp2q" Nov 22 00:18:34 crc kubenswrapper[4928]: I1122 00:18:34.008946 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/3630302f-db1c-4048-a470-20a9d2246217-proxy-ca-bundles\") pod \"controller-manager-666888dcd8-6lp2q\" (UID: \"3630302f-db1c-4048-a470-20a9d2246217\") " pod="openshift-controller-manager/controller-manager-666888dcd8-6lp2q" Nov 22 00:18:34 crc kubenswrapper[4928]: I1122 00:18:34.109972 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3630302f-db1c-4048-a470-20a9d2246217-serving-cert\") pod \"controller-manager-666888dcd8-6lp2q\" (UID: \"3630302f-db1c-4048-a470-20a9d2246217\") " pod="openshift-controller-manager/controller-manager-666888dcd8-6lp2q" Nov 22 00:18:34 crc kubenswrapper[4928]: I1122 00:18:34.110029 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3630302f-db1c-4048-a470-20a9d2246217-proxy-ca-bundles\") pod \"controller-manager-666888dcd8-6lp2q\" (UID: \"3630302f-db1c-4048-a470-20a9d2246217\") " pod="openshift-controller-manager/controller-manager-666888dcd8-6lp2q" Nov 22 00:18:34 crc kubenswrapper[4928]: I1122 00:18:34.110103 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3630302f-db1c-4048-a470-20a9d2246217-config\") pod \"controller-manager-666888dcd8-6lp2q\" (UID: \"3630302f-db1c-4048-a470-20a9d2246217\") " pod="openshift-controller-manager/controller-manager-666888dcd8-6lp2q" Nov 22 00:18:34 crc kubenswrapper[4928]: I1122 00:18:34.110149 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3630302f-db1c-4048-a470-20a9d2246217-client-ca\") pod \"controller-manager-666888dcd8-6lp2q\" (UID: \"3630302f-db1c-4048-a470-20a9d2246217\") " pod="openshift-controller-manager/controller-manager-666888dcd8-6lp2q" Nov 22 00:18:34 crc kubenswrapper[4928]: I1122 00:18:34.110185 4928 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd6wv\" (UniqueName: \"kubernetes.io/projected/3630302f-db1c-4048-a470-20a9d2246217-kube-api-access-vd6wv\") pod \"controller-manager-666888dcd8-6lp2q\" (UID: \"3630302f-db1c-4048-a470-20a9d2246217\") " pod="openshift-controller-manager/controller-manager-666888dcd8-6lp2q" Nov 22 00:18:34 crc kubenswrapper[4928]: I1122 00:18:34.111285 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3630302f-db1c-4048-a470-20a9d2246217-client-ca\") pod \"controller-manager-666888dcd8-6lp2q\" (UID: \"3630302f-db1c-4048-a470-20a9d2246217\") " pod="openshift-controller-manager/controller-manager-666888dcd8-6lp2q" Nov 22 00:18:34 crc kubenswrapper[4928]: I1122 00:18:34.111606 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3630302f-db1c-4048-a470-20a9d2246217-proxy-ca-bundles\") pod \"controller-manager-666888dcd8-6lp2q\" (UID: \"3630302f-db1c-4048-a470-20a9d2246217\") " pod="openshift-controller-manager/controller-manager-666888dcd8-6lp2q" Nov 22 00:18:34 crc kubenswrapper[4928]: I1122 00:18:34.111839 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3630302f-db1c-4048-a470-20a9d2246217-config\") pod \"controller-manager-666888dcd8-6lp2q\" (UID: \"3630302f-db1c-4048-a470-20a9d2246217\") " pod="openshift-controller-manager/controller-manager-666888dcd8-6lp2q" Nov 22 00:18:34 crc kubenswrapper[4928]: I1122 00:18:34.131350 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd6wv\" (UniqueName: \"kubernetes.io/projected/3630302f-db1c-4048-a470-20a9d2246217-kube-api-access-vd6wv\") pod \"controller-manager-666888dcd8-6lp2q\" (UID: \"3630302f-db1c-4048-a470-20a9d2246217\") " 
pod="openshift-controller-manager/controller-manager-666888dcd8-6lp2q" Nov 22 00:18:34 crc kubenswrapper[4928]: I1122 00:18:34.140568 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3630302f-db1c-4048-a470-20a9d2246217-serving-cert\") pod \"controller-manager-666888dcd8-6lp2q\" (UID: \"3630302f-db1c-4048-a470-20a9d2246217\") " pod="openshift-controller-manager/controller-manager-666888dcd8-6lp2q" Nov 22 00:18:34 crc kubenswrapper[4928]: I1122 00:18:34.249086 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-666888dcd8-6lp2q" Nov 22 00:18:34 crc kubenswrapper[4928]: I1122 00:18:34.367227 4928 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 22 00:18:34 crc kubenswrapper[4928]: I1122 00:18:34.720343 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zwl4z"] Nov 22 00:18:34 crc kubenswrapper[4928]: I1122 00:18:34.721480 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zwl4z" Nov 22 00:18:34 crc kubenswrapper[4928]: I1122 00:18:34.730186 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zwl4z"] Nov 22 00:18:34 crc kubenswrapper[4928]: I1122 00:18:34.819811 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d-catalog-content\") pod \"redhat-operators-zwl4z\" (UID: \"ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d\") " pod="openshift-marketplace/redhat-operators-zwl4z" Nov 22 00:18:34 crc kubenswrapper[4928]: I1122 00:18:34.819858 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d-utilities\") pod \"redhat-operators-zwl4z\" (UID: \"ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d\") " pod="openshift-marketplace/redhat-operators-zwl4z" Nov 22 00:18:34 crc kubenswrapper[4928]: I1122 00:18:34.819897 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dks92\" (UniqueName: \"kubernetes.io/projected/ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d-kube-api-access-dks92\") pod \"redhat-operators-zwl4z\" (UID: \"ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d\") " pod="openshift-marketplace/redhat-operators-zwl4z" Nov 22 00:18:34 crc kubenswrapper[4928]: I1122 00:18:34.921619 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d-catalog-content\") pod \"redhat-operators-zwl4z\" (UID: \"ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d\") " pod="openshift-marketplace/redhat-operators-zwl4z" Nov 22 00:18:34 crc kubenswrapper[4928]: I1122 00:18:34.921703 4928 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d-utilities\") pod \"redhat-operators-zwl4z\" (UID: \"ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d\") " pod="openshift-marketplace/redhat-operators-zwl4z" Nov 22 00:18:34 crc kubenswrapper[4928]: I1122 00:18:34.921764 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dks92\" (UniqueName: \"kubernetes.io/projected/ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d-kube-api-access-dks92\") pod \"redhat-operators-zwl4z\" (UID: \"ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d\") " pod="openshift-marketplace/redhat-operators-zwl4z" Nov 22 00:18:34 crc kubenswrapper[4928]: I1122 00:18:34.922939 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d-catalog-content\") pod \"redhat-operators-zwl4z\" (UID: \"ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d\") " pod="openshift-marketplace/redhat-operators-zwl4z" Nov 22 00:18:34 crc kubenswrapper[4928]: I1122 00:18:34.923039 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d-utilities\") pod \"redhat-operators-zwl4z\" (UID: \"ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d\") " pod="openshift-marketplace/redhat-operators-zwl4z" Nov 22 00:18:34 crc kubenswrapper[4928]: I1122 00:18:34.941850 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dks92\" (UniqueName: \"kubernetes.io/projected/ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d-kube-api-access-dks92\") pod \"redhat-operators-zwl4z\" (UID: \"ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d\") " pod="openshift-marketplace/redhat-operators-zwl4z" Nov 22 00:18:35 crc kubenswrapper[4928]: I1122 00:18:35.040512 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zwl4z" Nov 22 00:18:36 crc kubenswrapper[4928]: I1122 00:18:36.707367 4928 patch_prober.go:28] interesting pod/machine-config-daemon-hp7cp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 00:18:36 crc kubenswrapper[4928]: I1122 00:18:36.707689 4928 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 00:18:36 crc kubenswrapper[4928]: I1122 00:18:36.995711 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-7f9cbb56b6-kpdz8"] Nov 22 00:18:36 crc kubenswrapper[4928]: I1122 00:18:36.996566 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-7f9cbb56b6-kpdz8" Nov 22 00:18:36 crc kubenswrapper[4928]: I1122 00:18:36.999154 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-service-cert" Nov 22 00:18:37 crc kubenswrapper[4928]: I1122 00:18:37.021226 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-7f9cbb56b6-kpdz8"] Nov 22 00:18:37 crc kubenswrapper[4928]: I1122 00:18:37.051664 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8f1b9b0b-2f7c-4600-9f1b-57a2308b8625-webhook-cert\") pod \"elastic-operator-7f9cbb56b6-kpdz8\" (UID: \"8f1b9b0b-2f7c-4600-9f1b-57a2308b8625\") " pod="service-telemetry/elastic-operator-7f9cbb56b6-kpdz8" Nov 22 00:18:37 crc kubenswrapper[4928]: I1122 00:18:37.051730 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x784\" (UniqueName: \"kubernetes.io/projected/8f1b9b0b-2f7c-4600-9f1b-57a2308b8625-kube-api-access-4x784\") pod \"elastic-operator-7f9cbb56b6-kpdz8\" (UID: \"8f1b9b0b-2f7c-4600-9f1b-57a2308b8625\") " pod="service-telemetry/elastic-operator-7f9cbb56b6-kpdz8" Nov 22 00:18:37 crc kubenswrapper[4928]: I1122 00:18:37.051774 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8f1b9b0b-2f7c-4600-9f1b-57a2308b8625-apiservice-cert\") pod \"elastic-operator-7f9cbb56b6-kpdz8\" (UID: \"8f1b9b0b-2f7c-4600-9f1b-57a2308b8625\") " pod="service-telemetry/elastic-operator-7f9cbb56b6-kpdz8" Nov 22 00:18:37 crc kubenswrapper[4928]: I1122 00:18:37.152971 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8f1b9b0b-2f7c-4600-9f1b-57a2308b8625-webhook-cert\") pod 
\"elastic-operator-7f9cbb56b6-kpdz8\" (UID: \"8f1b9b0b-2f7c-4600-9f1b-57a2308b8625\") " pod="service-telemetry/elastic-operator-7f9cbb56b6-kpdz8" Nov 22 00:18:37 crc kubenswrapper[4928]: I1122 00:18:37.153053 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x784\" (UniqueName: \"kubernetes.io/projected/8f1b9b0b-2f7c-4600-9f1b-57a2308b8625-kube-api-access-4x784\") pod \"elastic-operator-7f9cbb56b6-kpdz8\" (UID: \"8f1b9b0b-2f7c-4600-9f1b-57a2308b8625\") " pod="service-telemetry/elastic-operator-7f9cbb56b6-kpdz8" Nov 22 00:18:37 crc kubenswrapper[4928]: I1122 00:18:37.153094 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8f1b9b0b-2f7c-4600-9f1b-57a2308b8625-apiservice-cert\") pod \"elastic-operator-7f9cbb56b6-kpdz8\" (UID: \"8f1b9b0b-2f7c-4600-9f1b-57a2308b8625\") " pod="service-telemetry/elastic-operator-7f9cbb56b6-kpdz8" Nov 22 00:18:37 crc kubenswrapper[4928]: I1122 00:18:37.157647 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8f1b9b0b-2f7c-4600-9f1b-57a2308b8625-apiservice-cert\") pod \"elastic-operator-7f9cbb56b6-kpdz8\" (UID: \"8f1b9b0b-2f7c-4600-9f1b-57a2308b8625\") " pod="service-telemetry/elastic-operator-7f9cbb56b6-kpdz8" Nov 22 00:18:37 crc kubenswrapper[4928]: I1122 00:18:37.159329 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8f1b9b0b-2f7c-4600-9f1b-57a2308b8625-webhook-cert\") pod \"elastic-operator-7f9cbb56b6-kpdz8\" (UID: \"8f1b9b0b-2f7c-4600-9f1b-57a2308b8625\") " pod="service-telemetry/elastic-operator-7f9cbb56b6-kpdz8" Nov 22 00:18:37 crc kubenswrapper[4928]: I1122 00:18:37.175063 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x784\" (UniqueName: 
\"kubernetes.io/projected/8f1b9b0b-2f7c-4600-9f1b-57a2308b8625-kube-api-access-4x784\") pod \"elastic-operator-7f9cbb56b6-kpdz8\" (UID: \"8f1b9b0b-2f7c-4600-9f1b-57a2308b8625\") " pod="service-telemetry/elastic-operator-7f9cbb56b6-kpdz8" Nov 22 00:18:37 crc kubenswrapper[4928]: I1122 00:18:37.314666 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-7f9cbb56b6-kpdz8" Nov 22 00:18:38 crc kubenswrapper[4928]: W1122 00:18:38.233234 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod020c781c_4012_44e8_84fd_dcdf581bb7e8.slice/crio-6bf83d7f0be405ad89bee45936bfee1451a0160e5e65583e82302e5a62f0553c WatchSource:0}: Error finding container 6bf83d7f0be405ad89bee45936bfee1451a0160e5e65583e82302e5a62f0553c: Status 404 returned error can't find the container with id 6bf83d7f0be405ad89bee45936bfee1451a0160e5e65583e82302e5a62f0553c Nov 22 00:18:38 crc kubenswrapper[4928]: I1122 00:18:38.286656 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc" Nov 22 00:18:38 crc kubenswrapper[4928]: I1122 00:18:38.369612 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8smf\" (UniqueName: \"kubernetes.io/projected/136db455-41e9-435d-8628-2674862c3ec9-kube-api-access-h8smf\") pod \"136db455-41e9-435d-8628-2674862c3ec9\" (UID: \"136db455-41e9-435d-8628-2674862c3ec9\") " Nov 22 00:18:38 crc kubenswrapper[4928]: I1122 00:18:38.369691 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/136db455-41e9-435d-8628-2674862c3ec9-bundle\") pod \"136db455-41e9-435d-8628-2674862c3ec9\" (UID: \"136db455-41e9-435d-8628-2674862c3ec9\") " Nov 22 00:18:38 crc kubenswrapper[4928]: I1122 00:18:38.369747 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/136db455-41e9-435d-8628-2674862c3ec9-util\") pod \"136db455-41e9-435d-8628-2674862c3ec9\" (UID: \"136db455-41e9-435d-8628-2674862c3ec9\") " Nov 22 00:18:38 crc kubenswrapper[4928]: I1122 00:18:38.371023 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/136db455-41e9-435d-8628-2674862c3ec9-bundle" (OuterVolumeSpecName: "bundle") pod "136db455-41e9-435d-8628-2674862c3ec9" (UID: "136db455-41e9-435d-8628-2674862c3ec9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:18:38 crc kubenswrapper[4928]: I1122 00:18:38.384871 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/136db455-41e9-435d-8628-2674862c3ec9-util" (OuterVolumeSpecName: "util") pod "136db455-41e9-435d-8628-2674862c3ec9" (UID: "136db455-41e9-435d-8628-2674862c3ec9"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:18:38 crc kubenswrapper[4928]: I1122 00:18:38.387947 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/136db455-41e9-435d-8628-2674862c3ec9-kube-api-access-h8smf" (OuterVolumeSpecName: "kube-api-access-h8smf") pod "136db455-41e9-435d-8628-2674862c3ec9" (UID: "136db455-41e9-435d-8628-2674862c3ec9"). InnerVolumeSpecName "kube-api-access-h8smf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:18:38 crc kubenswrapper[4928]: I1122 00:18:38.471562 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8smf\" (UniqueName: \"kubernetes.io/projected/136db455-41e9-435d-8628-2674862c3ec9-kube-api-access-h8smf\") on node \"crc\" DevicePath \"\"" Nov 22 00:18:38 crc kubenswrapper[4928]: I1122 00:18:38.471600 4928 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/136db455-41e9-435d-8628-2674862c3ec9-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 00:18:38 crc kubenswrapper[4928]: I1122 00:18:38.471610 4928 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/136db455-41e9-435d-8628-2674862c3ec9-util\") on node \"crc\" DevicePath \"\"" Nov 22 00:18:39 crc kubenswrapper[4928]: I1122 00:18:39.214117 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d45976c65-q2g45" event={"ID":"020c781c-4012-44e8-84fd-dcdf581bb7e8","Type":"ContainerStarted","Data":"6bf83d7f0be405ad89bee45936bfee1451a0160e5e65583e82302e5a62f0553c"} Nov 22 00:18:39 crc kubenswrapper[4928]: I1122 00:18:39.215750 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc" 
event={"ID":"136db455-41e9-435d-8628-2674862c3ec9","Type":"ContainerDied","Data":"dd79432c074a9ba6c156fe22ff833e26b8a6acf202bc050a8e41c8b62146c16e"} Nov 22 00:18:39 crc kubenswrapper[4928]: I1122 00:18:39.215880 4928 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd79432c074a9ba6c156fe22ff833e26b8a6acf202bc050a8e41c8b62146c16e" Nov 22 00:18:39 crc kubenswrapper[4928]: I1122 00:18:39.216017 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc" Nov 22 00:18:42 crc kubenswrapper[4928]: E1122 00:18:42.372078 4928 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec" Nov 22 00:18:42 crc kubenswrapper[4928]: E1122 00:18:42.372460 4928 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-66ddb79c8b-lxzbv_openshift-operators(2dd2bccc-816e-41de-906e-3541bfebaa24): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 00:18:42 crc kubenswrapper[4928]: E1122 00:18:42.373615 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66ddb79c8b-lxzbv" podUID="2dd2bccc-816e-41de-906e-3541bfebaa24" Nov 22 00:18:42 crc kubenswrapper[4928]: I1122 00:18:42.750477 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-vcqqh"] Nov 22 00:18:42 crc kubenswrapper[4928]: 
I1122 00:18:42.865851 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-7f9cbb56b6-kpdz8"] Nov 22 00:18:42 crc kubenswrapper[4928]: W1122 00:18:42.887009 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f1b9b0b_2f7c_4600_9f1b_57a2308b8625.slice/crio-9fe360b8b32225cf57d775d5bb5a0f74773d543928987b276ef5a21db65102d2 WatchSource:0}: Error finding container 9fe360b8b32225cf57d775d5bb5a0f74773d543928987b276ef5a21db65102d2: Status 404 returned error can't find the container with id 9fe360b8b32225cf57d775d5bb5a0f74773d543928987b276ef5a21db65102d2 Nov 22 00:18:42 crc kubenswrapper[4928]: I1122 00:18:42.887868 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-666888dcd8-6lp2q"] Nov 22 00:18:42 crc kubenswrapper[4928]: W1122 00:18:42.905177 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3630302f_db1c_4048_a470_20a9d2246217.slice/crio-42e1f41a9efcb28946be121415892b251510beba92bf60f2df31b2d8b32806cd WatchSource:0}: Error finding container 42e1f41a9efcb28946be121415892b251510beba92bf60f2df31b2d8b32806cd: Status 404 returned error can't find the container with id 42e1f41a9efcb28946be121415892b251510beba92bf60f2df31b2d8b32806cd Nov 22 00:18:43 crc kubenswrapper[4928]: I1122 00:18:43.171356 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zwl4z"] Nov 22 00:18:43 crc kubenswrapper[4928]: W1122 00:18:43.176391 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba620d9b_e9c7_42ca_8cf8_b1eb6728a22d.slice/crio-a44d1e2aab7c3b30aaff1afbd2f8f62cd974e24e28268435583ce2b357ceb807 WatchSource:0}: Error finding container a44d1e2aab7c3b30aaff1afbd2f8f62cd974e24e28268435583ce2b357ceb807: Status 404 
returned error can't find the container with id a44d1e2aab7c3b30aaff1afbd2f8f62cd974e24e28268435583ce2b357ceb807 Nov 22 00:18:43 crc kubenswrapper[4928]: I1122 00:18:43.241746 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-vcqqh" event={"ID":"a04bec7c-e880-4b00-a112-23c3809fa0f6","Type":"ContainerStarted","Data":"27cafc3834ffcfb4bf16efd4d73f26934266844a8f4abf48a5aa007639038a23"} Nov 22 00:18:43 crc kubenswrapper[4928]: I1122 00:18:43.242823 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-7f9cbb56b6-kpdz8" event={"ID":"8f1b9b0b-2f7c-4600-9f1b-57a2308b8625","Type":"ContainerStarted","Data":"9fe360b8b32225cf57d775d5bb5a0f74773d543928987b276ef5a21db65102d2"} Nov 22 00:18:43 crc kubenswrapper[4928]: I1122 00:18:43.248274 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-ntd99" event={"ID":"c771d376-6f26-4072-936f-e8a81dfa2596","Type":"ContainerStarted","Data":"0214ac93c6526d07b007ab79fe557bd0a80c3c9afd2057153273c559b64713f3"} Nov 22 00:18:43 crc kubenswrapper[4928]: I1122 00:18:43.248334 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-ntd99" Nov 22 00:18:43 crc kubenswrapper[4928]: I1122 00:18:43.250444 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66ddb79c8b-t6kn4" event={"ID":"645d1f1b-9079-4a84-bf38-f1029dd01b07","Type":"ContainerStarted","Data":"35e3187246cecfa2137368690f4528d2d1087cce4daff8ccd9a15b1817bac274"} Nov 22 00:18:43 crc kubenswrapper[4928]: I1122 00:18:43.252031 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-666888dcd8-6lp2q" 
event={"ID":"3630302f-db1c-4048-a470-20a9d2246217","Type":"ContainerStarted","Data":"ec9b3b3f4b0088dc7722f6bfdd1052652af2ce2803cc06c7b769e89f7e622e89"} Nov 22 00:18:43 crc kubenswrapper[4928]: I1122 00:18:43.252060 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-666888dcd8-6lp2q" event={"ID":"3630302f-db1c-4048-a470-20a9d2246217","Type":"ContainerStarted","Data":"42e1f41a9efcb28946be121415892b251510beba92bf60f2df31b2d8b32806cd"} Nov 22 00:18:43 crc kubenswrapper[4928]: I1122 00:18:43.252306 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-666888dcd8-6lp2q" Nov 22 00:18:43 crc kubenswrapper[4928]: I1122 00:18:43.253401 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwl4z" event={"ID":"ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d","Type":"ContainerStarted","Data":"a44d1e2aab7c3b30aaff1afbd2f8f62cd974e24e28268435583ce2b357ceb807"} Nov 22 00:18:43 crc kubenswrapper[4928]: I1122 00:18:43.254849 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-w6578" event={"ID":"6ed2e567-35f6-4eba-992b-0e94c74db92e","Type":"ContainerStarted","Data":"52967636632d5fc8c829e0c25416ab0543b6594478bddadf10ee19067e815f30"} Nov 22 00:18:43 crc kubenswrapper[4928]: I1122 00:18:43.254906 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-w6578" Nov 22 00:18:43 crc kubenswrapper[4928]: I1122 00:18:43.256669 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-666888dcd8-6lp2q" Nov 22 00:18:43 crc kubenswrapper[4928]: I1122 00:18:43.260699 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d45976c65-q2g45" 
event={"ID":"020c781c-4012-44e8-84fd-dcdf581bb7e8","Type":"ContainerStarted","Data":"01fb1cbbcd94aabd2857aec1840d54a90634a61e9a7c2507f2e5317eeaefa0ea"} Nov 22 00:18:43 crc kubenswrapper[4928]: I1122 00:18:43.260949 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-d45976c65-q2g45" Nov 22 00:18:43 crc kubenswrapper[4928]: I1122 00:18:43.265140 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-bxsxq" event={"ID":"f09d8639-7745-440c-a21b-5008c071fd0b","Type":"ContainerStarted","Data":"0acdd4df7ae3aa8cbe87994d0a42050d6d1a479028caa6451df70a816c207080"} Nov 22 00:18:43 crc kubenswrapper[4928]: I1122 00:18:43.286643 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-ntd99" podStartSLOduration=1.588230681 podStartE2EDuration="17.286626863s" podCreationTimestamp="2025-11-22 00:18:26 +0000 UTC" firstStartedPulling="2025-11-22 00:18:26.964567064 +0000 UTC m=+742.252598255" lastFinishedPulling="2025-11-22 00:18:42.662963246 +0000 UTC m=+757.950994437" observedRunningTime="2025-11-22 00:18:43.276574856 +0000 UTC m=+758.564606057" watchObservedRunningTime="2025-11-22 00:18:43.286626863 +0000 UTC m=+758.574658044" Nov 22 00:18:43 crc kubenswrapper[4928]: I1122 00:18:43.288110 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-ntd99" Nov 22 00:18:43 crc kubenswrapper[4928]: I1122 00:18:43.302149 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-666888dcd8-6lp2q" podStartSLOduration=14.302134876 podStartE2EDuration="14.302134876s" podCreationTimestamp="2025-11-22 00:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-22 00:18:43.299052404 +0000 UTC m=+758.587083615" watchObservedRunningTime="2025-11-22 00:18:43.302134876 +0000 UTC m=+758.590166057" Nov 22 00:18:43 crc kubenswrapper[4928]: I1122 00:18:43.336021 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-d45976c65-q2g45" podStartSLOduration=14.336004297 podStartE2EDuration="14.336004297s" podCreationTimestamp="2025-11-22 00:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:18:43.335565346 +0000 UTC m=+758.623596537" watchObservedRunningTime="2025-11-22 00:18:43.336004297 +0000 UTC m=+758.624035478" Nov 22 00:18:43 crc kubenswrapper[4928]: I1122 00:18:43.387814 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-bxsxq" podStartSLOduration=2.4313398250000002 podStartE2EDuration="18.387790875s" podCreationTimestamp="2025-11-22 00:18:25 +0000 UTC" firstStartedPulling="2025-11-22 00:18:26.507467459 +0000 UTC m=+741.795498650" lastFinishedPulling="2025-11-22 00:18:42.463918509 +0000 UTC m=+757.751949700" observedRunningTime="2025-11-22 00:18:43.387259521 +0000 UTC m=+758.675290712" watchObservedRunningTime="2025-11-22 00:18:43.387790875 +0000 UTC m=+758.675822076" Nov 22 00:18:43 crc kubenswrapper[4928]: I1122 00:18:43.407710 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-d45976c65-q2g45" Nov 22 00:18:43 crc kubenswrapper[4928]: I1122 00:18:43.467054 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66ddb79c8b-t6kn4" podStartSLOduration=2.6103139 podStartE2EDuration="18.467028975s" podCreationTimestamp="2025-11-22 00:18:25 +0000 UTC" 
firstStartedPulling="2025-11-22 00:18:26.592408839 +0000 UTC m=+741.880440030" lastFinishedPulling="2025-11-22 00:18:42.449123914 +0000 UTC m=+757.737155105" observedRunningTime="2025-11-22 00:18:43.464920228 +0000 UTC m=+758.752951429" watchObservedRunningTime="2025-11-22 00:18:43.467028975 +0000 UTC m=+758.755060166" Nov 22 00:18:43 crc kubenswrapper[4928]: I1122 00:18:43.496601 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-w6578" podStartSLOduration=1.8455006790000001 podStartE2EDuration="17.496583311s" podCreationTimestamp="2025-11-22 00:18:26 +0000 UTC" firstStartedPulling="2025-11-22 00:18:26.857133755 +0000 UTC m=+742.145164946" lastFinishedPulling="2025-11-22 00:18:42.508216387 +0000 UTC m=+757.796247578" observedRunningTime="2025-11-22 00:18:43.493333574 +0000 UTC m=+758.781364775" watchObservedRunningTime="2025-11-22 00:18:43.496583311 +0000 UTC m=+758.784614502" Nov 22 00:18:44 crc kubenswrapper[4928]: I1122 00:18:44.275667 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66ddb79c8b-lxzbv" event={"ID":"2dd2bccc-816e-41de-906e-3541bfebaa24","Type":"ContainerStarted","Data":"c5f425a0360f545d030a497e924e3748f0ff5a91ff6dadc7de347a321e386c9e"} Nov 22 00:18:44 crc kubenswrapper[4928]: I1122 00:18:44.284193 4928 generic.go:334] "Generic (PLEG): container finished" podID="ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d" containerID="b26782331d1e1e5ab14e7cd97f7d088a326a3647fc9b389b7a10a4c7f70305fa" exitCode=0 Nov 22 00:18:44 crc kubenswrapper[4928]: I1122 00:18:44.285154 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwl4z" event={"ID":"ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d","Type":"ContainerDied","Data":"b26782331d1e1e5ab14e7cd97f7d088a326a3647fc9b389b7a10a4c7f70305fa"} Nov 22 00:18:44 crc kubenswrapper[4928]: I1122 00:18:44.295759 4928 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66ddb79c8b-lxzbv" podStartSLOduration=-9223372017.559034 podStartE2EDuration="19.295740779s" podCreationTimestamp="2025-11-22 00:18:25 +0000 UTC" firstStartedPulling="2025-11-22 00:18:26.62737959 +0000 UTC m=+741.915410781" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:18:44.294893016 +0000 UTC m=+759.582924207" watchObservedRunningTime="2025-11-22 00:18:44.295740779 +0000 UTC m=+759.583771970" Nov 22 00:18:47 crc kubenswrapper[4928]: I1122 00:18:47.305389 4928 generic.go:334] "Generic (PLEG): container finished" podID="ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d" containerID="2844bc9c9a1156bf625ba9cc7d2f78190f162a64cb0624883ff1ee0d76c0f134" exitCode=0 Nov 22 00:18:47 crc kubenswrapper[4928]: I1122 00:18:47.305512 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwl4z" event={"ID":"ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d","Type":"ContainerDied","Data":"2844bc9c9a1156bf625ba9cc7d2f78190f162a64cb0624883ff1ee0d76c0f134"} Nov 22 00:18:47 crc kubenswrapper[4928]: I1122 00:18:47.307521 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-7f9cbb56b6-kpdz8" event={"ID":"8f1b9b0b-2f7c-4600-9f1b-57a2308b8625","Type":"ContainerStarted","Data":"c0c67503e837829502deaf967a41d4f0599aa1c85769fab33b5536efbfa0f320"} Nov 22 00:18:47 crc kubenswrapper[4928]: I1122 00:18:47.353889 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-7f9cbb56b6-kpdz8" podStartSLOduration=7.759425016 podStartE2EDuration="11.353874345s" podCreationTimestamp="2025-11-22 00:18:36 +0000 UTC" firstStartedPulling="2025-11-22 00:18:42.889429473 +0000 UTC m=+758.177460674" lastFinishedPulling="2025-11-22 00:18:46.483878812 +0000 UTC m=+761.771910003" observedRunningTime="2025-11-22 00:18:47.352431727 +0000 UTC m=+762.640462918" 
watchObservedRunningTime="2025-11-22 00:18:47.353874345 +0000 UTC m=+762.641905536" Nov 22 00:18:48 crc kubenswrapper[4928]: I1122 00:18:48.320206 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwl4z" event={"ID":"ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d","Type":"ContainerStarted","Data":"dd0ebf188a58683af5b0f71dff08ac5265941d8e08cb732b716db84cefbb1990"} Nov 22 00:18:48 crc kubenswrapper[4928]: I1122 00:18:48.378994 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zwl4z" podStartSLOduration=11.21980698 podStartE2EDuration="14.378974266s" podCreationTimestamp="2025-11-22 00:18:34 +0000 UTC" firstStartedPulling="2025-11-22 00:18:44.709094659 +0000 UTC m=+759.997125850" lastFinishedPulling="2025-11-22 00:18:47.868261945 +0000 UTC m=+763.156293136" observedRunningTime="2025-11-22 00:18:48.369005381 +0000 UTC m=+763.657036572" watchObservedRunningTime="2025-11-22 00:18:48.378974266 +0000 UTC m=+763.667005457" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.239216 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Nov 22 00:18:53 crc kubenswrapper[4928]: E1122 00:18:53.240015 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="136db455-41e9-435d-8628-2674862c3ec9" containerName="pull" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.240103 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="136db455-41e9-435d-8628-2674862c3ec9" containerName="pull" Nov 22 00:18:53 crc kubenswrapper[4928]: E1122 00:18:53.240214 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="136db455-41e9-435d-8628-2674862c3ec9" containerName="extract" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.240293 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="136db455-41e9-435d-8628-2674862c3ec9" containerName="extract" Nov 22 00:18:53 crc kubenswrapper[4928]: 
E1122 00:18:53.240367 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="136db455-41e9-435d-8628-2674862c3ec9" containerName="util" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.240459 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="136db455-41e9-435d-8628-2674862c3ec9" containerName="util" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.240703 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="136db455-41e9-435d-8628-2674862c3ec9" containerName="extract" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.241893 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.244478 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-http-certs-internal" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.244686 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-config" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.245215 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-xpack-file-realm" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.245432 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-scripts" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.245604 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-unicast-hosts" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.245729 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-remote-ca" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.245858 4928 reflector.go:368] Caches populated for *v1.Secret from 
object-"service-telemetry"/"default-dockercfg-jt976" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.246026 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-internal-users" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.246124 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-transport-certs" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.260810 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.309427 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/611dc6e0-3341-4770-ba1d-9f8e626fb414-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.309478 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.309509 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.309536 4928 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.309559 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.309579 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.309626 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.309681 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: 
\"kubernetes.io/downward-api/611dc6e0-3341-4770-ba1d-9f8e626fb414-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.309712 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.309831 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.309859 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.309906 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/611dc6e0-3341-4770-ba1d-9f8e626fb414-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 
22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.309939 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.310012 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/611dc6e0-3341-4770-ba1d-9f8e626fb414-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.310035 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.411123 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/611dc6e0-3341-4770-ba1d-9f8e626fb414-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.411205 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-probe-user\") pod 
\"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.411240 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.411280 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.411306 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/611dc6e0-3341-4770-ba1d-9f8e626fb414-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.411336 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.411367 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/611dc6e0-3341-4770-ba1d-9f8e626fb414-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.411390 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.411431 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/611dc6e0-3341-4770-ba1d-9f8e626fb414-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.411456 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.411489 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.411530 
4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.411564 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.411590 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.411617 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.412286 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: 
\"kubernetes.io/empty-dir/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.413981 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.414099 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/611dc6e0-3341-4770-ba1d-9f8e626fb414-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.415792 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.416149 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/611dc6e0-3341-4770-ba1d-9f8e626fb414-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.416554 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.416697 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.416973 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/611dc6e0-3341-4770-ba1d-9f8e626fb414-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.421411 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.423048 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: 
I1122 00:18:53.423684 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.435677 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.435773 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/611dc6e0-3341-4770-ba1d-9f8e626fb414-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.435866 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.436201 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/611dc6e0-3341-4770-ba1d-9f8e626fb414-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: 
\"611dc6e0-3341-4770-ba1d-9f8e626fb414\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:53 crc kubenswrapper[4928]: I1122 00:18:53.562616 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:18:55 crc kubenswrapper[4928]: I1122 00:18:55.041009 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zwl4z" Nov 22 00:18:55 crc kubenswrapper[4928]: I1122 00:18:55.041529 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zwl4z" Nov 22 00:18:55 crc kubenswrapper[4928]: I1122 00:18:55.141525 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-9tz66"] Nov 22 00:18:55 crc kubenswrapper[4928]: I1122 00:18:55.142590 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-9tz66" Nov 22 00:18:55 crc kubenswrapper[4928]: I1122 00:18:55.150911 4928 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-frtgw" Nov 22 00:18:55 crc kubenswrapper[4928]: I1122 00:18:55.151352 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Nov 22 00:18:55 crc kubenswrapper[4928]: I1122 00:18:55.151509 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Nov 22 00:18:55 crc kubenswrapper[4928]: I1122 00:18:55.201079 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-9tz66"] Nov 22 00:18:55 crc kubenswrapper[4928]: I1122 00:18:55.212863 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["service-telemetry/elasticsearch-es-default-0"] Nov 22 00:18:55 crc kubenswrapper[4928]: I1122 00:18:55.249591 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f988bb18-31a2-4cd0-a9bc-9d2a2df2ffe5-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-9tz66\" (UID: \"f988bb18-31a2-4cd0-a9bc-9d2a2df2ffe5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-9tz66" Nov 22 00:18:55 crc kubenswrapper[4928]: I1122 00:18:55.249660 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w926\" (UniqueName: \"kubernetes.io/projected/f988bb18-31a2-4cd0-a9bc-9d2a2df2ffe5-kube-api-access-9w926\") pod \"cert-manager-operator-controller-manager-5446d6888b-9tz66\" (UID: \"f988bb18-31a2-4cd0-a9bc-9d2a2df2ffe5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-9tz66" Nov 22 00:18:55 crc kubenswrapper[4928]: I1122 00:18:55.351377 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f988bb18-31a2-4cd0-a9bc-9d2a2df2ffe5-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-9tz66\" (UID: \"f988bb18-31a2-4cd0-a9bc-9d2a2df2ffe5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-9tz66" Nov 22 00:18:55 crc kubenswrapper[4928]: I1122 00:18:55.351448 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w926\" (UniqueName: \"kubernetes.io/projected/f988bb18-31a2-4cd0-a9bc-9d2a2df2ffe5-kube-api-access-9w926\") pod \"cert-manager-operator-controller-manager-5446d6888b-9tz66\" (UID: \"f988bb18-31a2-4cd0-a9bc-9d2a2df2ffe5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-9tz66" Nov 22 00:18:55 crc kubenswrapper[4928]: I1122 00:18:55.352330 4928 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f988bb18-31a2-4cd0-a9bc-9d2a2df2ffe5-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-9tz66\" (UID: \"f988bb18-31a2-4cd0-a9bc-9d2a2df2ffe5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-9tz66" Nov 22 00:18:55 crc kubenswrapper[4928]: I1122 00:18:55.370702 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"611dc6e0-3341-4770-ba1d-9f8e626fb414","Type":"ContainerStarted","Data":"fc3bf5e98fa8c5cd4603094c83af97323a8c8b31e56645d9352b4ab44551227d"} Nov 22 00:18:55 crc kubenswrapper[4928]: I1122 00:18:55.370816 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w926\" (UniqueName: \"kubernetes.io/projected/f988bb18-31a2-4cd0-a9bc-9d2a2df2ffe5-kube-api-access-9w926\") pod \"cert-manager-operator-controller-manager-5446d6888b-9tz66\" (UID: \"f988bb18-31a2-4cd0-a9bc-9d2a2df2ffe5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-9tz66" Nov 22 00:18:55 crc kubenswrapper[4928]: I1122 00:18:55.372276 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-vcqqh" event={"ID":"a04bec7c-e880-4b00-a112-23c3809fa0f6","Type":"ContainerStarted","Data":"f272e89a0e6bcd77407bc068ca88a281d25ccfa400b036fd912804da41faa659"} Nov 22 00:18:55 crc kubenswrapper[4928]: I1122 00:18:55.395118 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-5bb49f789d-vcqqh" podStartSLOduration=10.35379402 podStartE2EDuration="22.395096357s" podCreationTimestamp="2025-11-22 00:18:33 +0000 UTC" firstStartedPulling="2025-11-22 00:18:42.832644111 +0000 UTC m=+758.120675302" lastFinishedPulling="2025-11-22 00:18:54.873946448 +0000 UTC m=+770.161977639" observedRunningTime="2025-11-22 
00:18:55.388139122 +0000 UTC m=+770.676170323" watchObservedRunningTime="2025-11-22 00:18:55.395096357 +0000 UTC m=+770.683127548" Nov 22 00:18:55 crc kubenswrapper[4928]: I1122 00:18:55.460964 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-9tz66" Nov 22 00:18:56 crc kubenswrapper[4928]: I1122 00:18:56.094556 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-9tz66"] Nov 22 00:18:56 crc kubenswrapper[4928]: W1122 00:18:56.107434 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf988bb18_31a2_4cd0_a9bc_9d2a2df2ffe5.slice/crio-9f781763628328ab412a54f425187f3b6516a130fb2ed4592565c57f740da665 WatchSource:0}: Error finding container 9f781763628328ab412a54f425187f3b6516a130fb2ed4592565c57f740da665: Status 404 returned error can't find the container with id 9f781763628328ab412a54f425187f3b6516a130fb2ed4592565c57f740da665 Nov 22 00:18:56 crc kubenswrapper[4928]: I1122 00:18:56.109767 4928 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zwl4z" podUID="ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d" containerName="registry-server" probeResult="failure" output=< Nov 22 00:18:56 crc kubenswrapper[4928]: timeout: failed to connect service ":50051" within 1s Nov 22 00:18:56 crc kubenswrapper[4928]: > Nov 22 00:18:56 crc kubenswrapper[4928]: I1122 00:18:56.381220 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-9tz66" event={"ID":"f988bb18-31a2-4cd0-a9bc-9d2a2df2ffe5","Type":"ContainerStarted","Data":"9f781763628328ab412a54f425187f3b6516a130fb2ed4592565c57f740da665"} Nov 22 00:18:56 crc kubenswrapper[4928]: I1122 00:18:56.518389 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-operators/perses-operator-5446b9c989-w6578" Nov 22 00:19:05 crc kubenswrapper[4928]: I1122 00:19:05.082695 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zwl4z" Nov 22 00:19:05 crc kubenswrapper[4928]: I1122 00:19:05.133105 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zwl4z" Nov 22 00:19:05 crc kubenswrapper[4928]: I1122 00:19:05.720442 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zwl4z"] Nov 22 00:19:06 crc kubenswrapper[4928]: I1122 00:19:06.462501 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zwl4z" podUID="ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d" containerName="registry-server" containerID="cri-o://dd0ebf188a58683af5b0f71dff08ac5265941d8e08cb732b716db84cefbb1990" gracePeriod=2 Nov 22 00:19:06 crc kubenswrapper[4928]: I1122 00:19:06.707184 4928 patch_prober.go:28] interesting pod/machine-config-daemon-hp7cp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 00:19:06 crc kubenswrapper[4928]: I1122 00:19:06.707246 4928 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 00:19:07 crc kubenswrapper[4928]: I1122 00:19:07.470395 4928 generic.go:334] "Generic (PLEG): container finished" podID="ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d" containerID="dd0ebf188a58683af5b0f71dff08ac5265941d8e08cb732b716db84cefbb1990" exitCode=0 Nov 22 
00:19:07 crc kubenswrapper[4928]: I1122 00:19:07.470443 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwl4z" event={"ID":"ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d","Type":"ContainerDied","Data":"dd0ebf188a58683af5b0f71dff08ac5265941d8e08cb732b716db84cefbb1990"} Nov 22 00:19:13 crc kubenswrapper[4928]: I1122 00:19:13.693308 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zwl4z" Nov 22 00:19:13 crc kubenswrapper[4928]: I1122 00:19:13.708412 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d-utilities\") pod \"ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d\" (UID: \"ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d\") " Nov 22 00:19:13 crc kubenswrapper[4928]: I1122 00:19:13.708519 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dks92\" (UniqueName: \"kubernetes.io/projected/ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d-kube-api-access-dks92\") pod \"ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d\" (UID: \"ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d\") " Nov 22 00:19:13 crc kubenswrapper[4928]: I1122 00:19:13.710186 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d-catalog-content\") pod \"ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d\" (UID: \"ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d\") " Nov 22 00:19:13 crc kubenswrapper[4928]: I1122 00:19:13.751190 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d-kube-api-access-dks92" (OuterVolumeSpecName: "kube-api-access-dks92") pod "ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d" (UID: "ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d"). InnerVolumeSpecName "kube-api-access-dks92". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:19:13 crc kubenswrapper[4928]: I1122 00:19:13.751477 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d-utilities" (OuterVolumeSpecName: "utilities") pod "ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d" (UID: "ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:19:13 crc kubenswrapper[4928]: I1122 00:19:13.811525 4928 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 00:19:13 crc kubenswrapper[4928]: I1122 00:19:13.811767 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dks92\" (UniqueName: \"kubernetes.io/projected/ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d-kube-api-access-dks92\") on node \"crc\" DevicePath \"\"" Nov 22 00:19:13 crc kubenswrapper[4928]: I1122 00:19:13.816676 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d" (UID: "ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:19:13 crc kubenswrapper[4928]: I1122 00:19:13.912503 4928 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 00:19:14 crc kubenswrapper[4928]: E1122 00:19:14.058054 4928 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="registry.connect.redhat.com/elastic/elasticsearch:7.17.20" Nov 22 00:19:14 crc kubenswrapper[4928]: E1122 00:19:14.058279 4928 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:elastic-internal-init-filesystem,Image:registry.connect.redhat.com/elastic/elasticsearch:7.17.20,Command:[bash -c /mnt/elastic-internal/scripts/prepare-fs.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:HEADLESS_SERVICE_NAME,Value:elasticsearch-es-default,ValueFrom:nil,},EnvVar{Name:PROBE_PASSWORD_PATH,Value:/mnt/elastic-internal/pod-mounted-users/elastic-internal-probe,ValueFrom:nil,},EnvVar{Name:PROBE_USERNAME,Value:elastic-internal-probe,ValueFrom:nil,},EnvVar{Name:READINESS_PROBE_PROTOCOL,Valu
e:https,ValueFrom:nil,},EnvVar{Name:NSS_SDB_USE_CACHE,Value:no,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:downward-api,ReadOnly:true,MountPath:/mnt/elastic-internal/downward-api,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-bin-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-bin-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config,ReadOnly:true,MountPath:/mnt/elastic-internal/elasticsearch-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-config-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-plugins-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-plugins-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-http-certificates,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/http-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-probe-user,ReadOnly:true,MountPath:/mnt/elastic-internal/pod-mounted-users,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-remote-certificate-authorities,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/transport-remote-certs/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-scripts,ReadOnly:true,MountPath:/mnt/elastic-internal/scripts,SubPath:,MountPropagati
on:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-transport-certificates,ReadOnly:true,MountPath:/mnt/elastic-internal/transport-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-unicast-hosts,ReadOnly:true,MountPath:/mnt/elastic-internal/unicast-hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-xpack-file-realm,ReadOnly:true,MountPath:/mnt/elastic-internal/xpack-file-realm,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-data,ReadOnly:false,MountPath:/usr/share/elasticsearch/data,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-logs,ReadOnly:false,MountPath:/usr/share/elasticsearch/logs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod elasticsearch-es-default-0_service-telemetry(611dc6e0-3341-4770-ba1d-9f8e626fb414): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 00:19:14 crc kubenswrapper[4928]: E1122 00:19:14.059692 4928 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="611dc6e0-3341-4770-ba1d-9f8e626fb414" Nov 22 00:19:14 crc kubenswrapper[4928]: I1122 00:19:14.526027 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-9tz66" event={"ID":"f988bb18-31a2-4cd0-a9bc-9d2a2df2ffe5","Type":"ContainerStarted","Data":"21e41061f7baf82a4108bf689f45246c1895b8e5a10ef8e4f5ac1b29b82b4753"} Nov 22 00:19:14 crc kubenswrapper[4928]: I1122 00:19:14.528667 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zwl4z" Nov 22 00:19:14 crc kubenswrapper[4928]: I1122 00:19:14.528766 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwl4z" event={"ID":"ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d","Type":"ContainerDied","Data":"a44d1e2aab7c3b30aaff1afbd2f8f62cd974e24e28268435583ce2b357ceb807"} Nov 22 00:19:14 crc kubenswrapper[4928]: I1122 00:19:14.528867 4928 scope.go:117] "RemoveContainer" containerID="dd0ebf188a58683af5b0f71dff08ac5265941d8e08cb732b716db84cefbb1990" Nov 22 00:19:14 crc kubenswrapper[4928]: E1122 00:19:14.530160 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="611dc6e0-3341-4770-ba1d-9f8e626fb414" Nov 22 00:19:14 crc kubenswrapper[4928]: I1122 00:19:14.549187 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-9tz66" podStartSLOduration=1.794719027 
podStartE2EDuration="19.549169329s" podCreationTimestamp="2025-11-22 00:18:55 +0000 UTC" firstStartedPulling="2025-11-22 00:18:56.111515413 +0000 UTC m=+771.399546604" lastFinishedPulling="2025-11-22 00:19:13.865965715 +0000 UTC m=+789.153996906" observedRunningTime="2025-11-22 00:19:14.547086433 +0000 UTC m=+789.835117624" watchObservedRunningTime="2025-11-22 00:19:14.549169329 +0000 UTC m=+789.837200530" Nov 22 00:19:14 crc kubenswrapper[4928]: I1122 00:19:14.559264 4928 scope.go:117] "RemoveContainer" containerID="2844bc9c9a1156bf625ba9cc7d2f78190f162a64cb0624883ff1ee0d76c0f134" Nov 22 00:19:14 crc kubenswrapper[4928]: I1122 00:19:14.593617 4928 scope.go:117] "RemoveContainer" containerID="b26782331d1e1e5ab14e7cd97f7d088a326a3647fc9b389b7a10a4c7f70305fa" Nov 22 00:19:14 crc kubenswrapper[4928]: I1122 00:19:14.615595 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zwl4z"] Nov 22 00:19:14 crc kubenswrapper[4928]: I1122 00:19:14.620907 4928 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zwl4z"] Nov 22 00:19:14 crc kubenswrapper[4928]: I1122 00:19:14.692265 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Nov 22 00:19:14 crc kubenswrapper[4928]: I1122 00:19:14.721197 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Nov 22 00:19:15 crc kubenswrapper[4928]: E1122 00:19:15.538007 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="611dc6e0-3341-4770-ba1d-9f8e626fb414" Nov 22 00:19:15 crc kubenswrapper[4928]: I1122 00:19:15.624071 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d" path="/var/lib/kubelet/pods/ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d/volumes" Nov 22 00:19:16 crc kubenswrapper[4928]: E1122 00:19:16.542260 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="611dc6e0-3341-4770-ba1d-9f8e626fb414" Nov 22 00:19:18 crc kubenswrapper[4928]: I1122 00:19:18.001724 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-92wzm"] Nov 22 00:19:18 crc kubenswrapper[4928]: E1122 00:19:18.002027 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d" containerName="extract-content" Nov 22 00:19:18 crc kubenswrapper[4928]: I1122 00:19:18.002042 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d" containerName="extract-content" Nov 22 00:19:18 crc kubenswrapper[4928]: E1122 00:19:18.002054 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d" containerName="extract-utilities" Nov 22 00:19:18 crc kubenswrapper[4928]: I1122 00:19:18.002062 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d" containerName="extract-utilities" Nov 22 00:19:18 crc kubenswrapper[4928]: E1122 00:19:18.002085 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d" containerName="registry-server" Nov 22 00:19:18 crc kubenswrapper[4928]: I1122 00:19:18.002092 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d" containerName="registry-server" Nov 22 00:19:18 crc kubenswrapper[4928]: I1122 00:19:18.002206 4928 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="ba620d9b-e9c7-42ca-8cf8-b1eb6728a22d" containerName="registry-server" Nov 22 00:19:18 crc kubenswrapper[4928]: I1122 00:19:18.002665 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-92wzm" Nov 22 00:19:18 crc kubenswrapper[4928]: I1122 00:19:18.005041 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 22 00:19:18 crc kubenswrapper[4928]: I1122 00:19:18.005058 4928 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-mvsk8" Nov 22 00:19:18 crc kubenswrapper[4928]: I1122 00:19:18.005313 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 22 00:19:18 crc kubenswrapper[4928]: I1122 00:19:18.015787 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-92wzm"] Nov 22 00:19:18 crc kubenswrapper[4928]: I1122 00:19:18.057393 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88k55\" (UniqueName: \"kubernetes.io/projected/8f2f9770-1e25-466d-853b-04fcf1ec872a-kube-api-access-88k55\") pod \"cert-manager-webhook-f4fb5df64-92wzm\" (UID: \"8f2f9770-1e25-466d-853b-04fcf1ec872a\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-92wzm" Nov 22 00:19:18 crc kubenswrapper[4928]: I1122 00:19:18.057497 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f2f9770-1e25-466d-853b-04fcf1ec872a-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-92wzm\" (UID: \"8f2f9770-1e25-466d-853b-04fcf1ec872a\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-92wzm" Nov 22 00:19:18 crc kubenswrapper[4928]: I1122 00:19:18.158290 4928 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f2f9770-1e25-466d-853b-04fcf1ec872a-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-92wzm\" (UID: \"8f2f9770-1e25-466d-853b-04fcf1ec872a\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-92wzm" Nov 22 00:19:18 crc kubenswrapper[4928]: I1122 00:19:18.158361 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88k55\" (UniqueName: \"kubernetes.io/projected/8f2f9770-1e25-466d-853b-04fcf1ec872a-kube-api-access-88k55\") pod \"cert-manager-webhook-f4fb5df64-92wzm\" (UID: \"8f2f9770-1e25-466d-853b-04fcf1ec872a\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-92wzm" Nov 22 00:19:18 crc kubenswrapper[4928]: I1122 00:19:18.187741 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88k55\" (UniqueName: \"kubernetes.io/projected/8f2f9770-1e25-466d-853b-04fcf1ec872a-kube-api-access-88k55\") pod \"cert-manager-webhook-f4fb5df64-92wzm\" (UID: \"8f2f9770-1e25-466d-853b-04fcf1ec872a\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-92wzm" Nov 22 00:19:18 crc kubenswrapper[4928]: I1122 00:19:18.201848 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f2f9770-1e25-466d-853b-04fcf1ec872a-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-92wzm\" (UID: \"8f2f9770-1e25-466d-853b-04fcf1ec872a\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-92wzm" Nov 22 00:19:18 crc kubenswrapper[4928]: I1122 00:19:18.323216 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-92wzm" Nov 22 00:19:18 crc kubenswrapper[4928]: I1122 00:19:18.766159 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-92wzm"] Nov 22 00:19:18 crc kubenswrapper[4928]: W1122 00:19:18.769500 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f2f9770_1e25_466d_853b_04fcf1ec872a.slice/crio-5635c828eb4326d4af5790fa04e7a72022373470a79dc171da27cf09250317f6 WatchSource:0}: Error finding container 5635c828eb4326d4af5790fa04e7a72022373470a79dc171da27cf09250317f6: Status 404 returned error can't find the container with id 5635c828eb4326d4af5790fa04e7a72022373470a79dc171da27cf09250317f6 Nov 22 00:19:19 crc kubenswrapper[4928]: I1122 00:19:19.554110 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-92wzm" event={"ID":"8f2f9770-1e25-466d-853b-04fcf1ec872a","Type":"ContainerStarted","Data":"5635c828eb4326d4af5790fa04e7a72022373470a79dc171da27cf09250317f6"} Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.278880 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.280063 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.281848 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-global-ca" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.281849 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-sys-config" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.282109 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-zp4ph" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.283792 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-ca" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.296713 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.438475 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smt88\" (UniqueName: \"kubernetes.io/projected/01b3dffb-f75f-4574-9bc6-973050fceb66-kube-api-access-smt88\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.438532 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/01b3dffb-f75f-4574-9bc6-973050fceb66-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.438571 4928 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/01b3dffb-f75f-4574-9bc6-973050fceb66-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.438593 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/01b3dffb-f75f-4574-9bc6-973050fceb66-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.438614 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01b3dffb-f75f-4574-9bc6-973050fceb66-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.438638 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/01b3dffb-f75f-4574-9bc6-973050fceb66-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.438656 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/01b3dffb-f75f-4574-9bc6-973050fceb66-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: 
\"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.438675 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/01b3dffb-f75f-4574-9bc6-973050fceb66-builder-dockercfg-zp4ph-push\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.438696 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/01b3dffb-f75f-4574-9bc6-973050fceb66-builder-dockercfg-zp4ph-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.438716 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01b3dffb-f75f-4574-9bc6-973050fceb66-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.438735 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/01b3dffb-f75f-4574-9bc6-973050fceb66-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.438757 4928 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/01b3dffb-f75f-4574-9bc6-973050fceb66-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.539785 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/01b3dffb-f75f-4574-9bc6-973050fceb66-builder-dockercfg-zp4ph-push\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.539855 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/01b3dffb-f75f-4574-9bc6-973050fceb66-builder-dockercfg-zp4ph-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.539878 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01b3dffb-f75f-4574-9bc6-973050fceb66-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.539900 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/01b3dffb-f75f-4574-9bc6-973050fceb66-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: 
\"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.539931 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/01b3dffb-f75f-4574-9bc6-973050fceb66-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.539954 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smt88\" (UniqueName: \"kubernetes.io/projected/01b3dffb-f75f-4574-9bc6-973050fceb66-kube-api-access-smt88\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.539971 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/01b3dffb-f75f-4574-9bc6-973050fceb66-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.540008 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/01b3dffb-f75f-4574-9bc6-973050fceb66-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.540029 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/01b3dffb-f75f-4574-9bc6-973050fceb66-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.540045 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01b3dffb-f75f-4574-9bc6-973050fceb66-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.540073 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/01b3dffb-f75f-4574-9bc6-973050fceb66-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.540091 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/01b3dffb-f75f-4574-9bc6-973050fceb66-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.540474 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/01b3dffb-f75f-4574-9bc6-973050fceb66-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.540953 4928 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/01b3dffb-f75f-4574-9bc6-973050fceb66-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.540962 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/01b3dffb-f75f-4574-9bc6-973050fceb66-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.541073 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/01b3dffb-f75f-4574-9bc6-973050fceb66-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.541300 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/01b3dffb-f75f-4574-9bc6-973050fceb66-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.541455 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01b3dffb-f75f-4574-9bc6-973050fceb66-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.541522 
4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/01b3dffb-f75f-4574-9bc6-973050fceb66-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.541673 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/01b3dffb-f75f-4574-9bc6-973050fceb66-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.542381 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01b3dffb-f75f-4574-9bc6-973050fceb66-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.545031 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/01b3dffb-f75f-4574-9bc6-973050fceb66-builder-dockercfg-zp4ph-push\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.545694 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/01b3dffb-f75f-4574-9bc6-973050fceb66-builder-dockercfg-zp4ph-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.557032 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smt88\" (UniqueName: \"kubernetes.io/projected/01b3dffb-f75f-4574-9bc6-973050fceb66-kube-api-access-smt88\") pod \"service-telemetry-operator-1-build\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:23 crc kubenswrapper[4928]: I1122 00:19:23.643307 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:24 crc kubenswrapper[4928]: I1122 00:19:24.546755 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-jpw42"] Nov 22 00:19:24 crc kubenswrapper[4928]: I1122 00:19:24.547689 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-jpw42" Nov 22 00:19:24 crc kubenswrapper[4928]: I1122 00:19:24.549275 4928 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-5xb8w" Nov 22 00:19:24 crc kubenswrapper[4928]: I1122 00:19:24.551203 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-jpw42"] Nov 22 00:19:24 crc kubenswrapper[4928]: I1122 00:19:24.658844 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvvj6\" (UniqueName: \"kubernetes.io/projected/c2248cb6-0cd4-404e-bb44-b6736ec6149b-kube-api-access-zvvj6\") pod \"cert-manager-cainjector-855d9ccff4-jpw42\" (UID: \"c2248cb6-0cd4-404e-bb44-b6736ec6149b\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-jpw42" Nov 22 00:19:24 crc kubenswrapper[4928]: I1122 00:19:24.658961 4928 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2248cb6-0cd4-404e-bb44-b6736ec6149b-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-jpw42\" (UID: \"c2248cb6-0cd4-404e-bb44-b6736ec6149b\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-jpw42" Nov 22 00:19:24 crc kubenswrapper[4928]: I1122 00:19:24.760667 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2248cb6-0cd4-404e-bb44-b6736ec6149b-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-jpw42\" (UID: \"c2248cb6-0cd4-404e-bb44-b6736ec6149b\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-jpw42" Nov 22 00:19:24 crc kubenswrapper[4928]: I1122 00:19:24.760718 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvvj6\" (UniqueName: \"kubernetes.io/projected/c2248cb6-0cd4-404e-bb44-b6736ec6149b-kube-api-access-zvvj6\") pod \"cert-manager-cainjector-855d9ccff4-jpw42\" (UID: \"c2248cb6-0cd4-404e-bb44-b6736ec6149b\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-jpw42" Nov 22 00:19:24 crc kubenswrapper[4928]: I1122 00:19:24.787696 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2248cb6-0cd4-404e-bb44-b6736ec6149b-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-jpw42\" (UID: \"c2248cb6-0cd4-404e-bb44-b6736ec6149b\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-jpw42" Nov 22 00:19:24 crc kubenswrapper[4928]: I1122 00:19:24.796599 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvvj6\" (UniqueName: \"kubernetes.io/projected/c2248cb6-0cd4-404e-bb44-b6736ec6149b-kube-api-access-zvvj6\") pod \"cert-manager-cainjector-855d9ccff4-jpw42\" (UID: \"c2248cb6-0cd4-404e-bb44-b6736ec6149b\") " 
pod="cert-manager/cert-manager-cainjector-855d9ccff4-jpw42" Nov 22 00:19:24 crc kubenswrapper[4928]: I1122 00:19:24.869650 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-jpw42" Nov 22 00:19:26 crc kubenswrapper[4928]: I1122 00:19:26.628621 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-jpw42"] Nov 22 00:19:26 crc kubenswrapper[4928]: W1122 00:19:26.637514 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2248cb6_0cd4_404e_bb44_b6736ec6149b.slice/crio-ba36b87aef35d7dea439ab0a6aeeefca90857da7bf002a66490ca773b5d057c3 WatchSource:0}: Error finding container ba36b87aef35d7dea439ab0a6aeeefca90857da7bf002a66490ca773b5d057c3: Status 404 returned error can't find the container with id ba36b87aef35d7dea439ab0a6aeeefca90857da7bf002a66490ca773b5d057c3 Nov 22 00:19:26 crc kubenswrapper[4928]: I1122 00:19:26.742419 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Nov 22 00:19:26 crc kubenswrapper[4928]: W1122 00:19:26.750275 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01b3dffb_f75f_4574_9bc6_973050fceb66.slice/crio-7e7c9bc41b5d99b0f0e9226686285c3c9e03901de4c21ecb4524a647dc99f3da WatchSource:0}: Error finding container 7e7c9bc41b5d99b0f0e9226686285c3c9e03901de4c21ecb4524a647dc99f3da: Status 404 returned error can't find the container with id 7e7c9bc41b5d99b0f0e9226686285c3c9e03901de4c21ecb4524a647dc99f3da Nov 22 00:19:27 crc kubenswrapper[4928]: I1122 00:19:27.612630 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-jpw42" 
event={"ID":"c2248cb6-0cd4-404e-bb44-b6736ec6149b","Type":"ContainerStarted","Data":"1b79f33eba6cb0e05daf5cbb41455a2b6550581eeb909597e3190fb0b4d974e4"} Nov 22 00:19:27 crc kubenswrapper[4928]: I1122 00:19:27.612689 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-jpw42" event={"ID":"c2248cb6-0cd4-404e-bb44-b6736ec6149b","Type":"ContainerStarted","Data":"ba36b87aef35d7dea439ab0a6aeeefca90857da7bf002a66490ca773b5d057c3"} Nov 22 00:19:27 crc kubenswrapper[4928]: I1122 00:19:27.614927 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-92wzm" event={"ID":"8f2f9770-1e25-466d-853b-04fcf1ec872a","Type":"ContainerStarted","Data":"87dd4ad5263987796998debc95e53bbcd99cc3d54e8371d8c7857e38f24c1e1f"} Nov 22 00:19:27 crc kubenswrapper[4928]: I1122 00:19:27.615078 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-92wzm" Nov 22 00:19:27 crc kubenswrapper[4928]: I1122 00:19:27.616818 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"01b3dffb-f75f-4574-9bc6-973050fceb66","Type":"ContainerStarted","Data":"7e7c9bc41b5d99b0f0e9226686285c3c9e03901de4c21ecb4524a647dc99f3da"} Nov 22 00:19:27 crc kubenswrapper[4928]: I1122 00:19:27.637109 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-jpw42" podStartSLOduration=3.637064906 podStartE2EDuration="3.637064906s" podCreationTimestamp="2025-11-22 00:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:19:27.630428506 +0000 UTC m=+802.918459697" watchObservedRunningTime="2025-11-22 00:19:27.637064906 +0000 UTC m=+802.925096107" Nov 22 00:19:33 crc kubenswrapper[4928]: I1122 00:19:33.327417 4928 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-92wzm" Nov 22 00:19:33 crc kubenswrapper[4928]: I1122 00:19:33.350050 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-92wzm" podStartSLOduration=8.66914794 podStartE2EDuration="16.350013869s" podCreationTimestamp="2025-11-22 00:19:17 +0000 UTC" firstStartedPulling="2025-11-22 00:19:18.771965064 +0000 UTC m=+794.059996255" lastFinishedPulling="2025-11-22 00:19:26.452830993 +0000 UTC m=+801.740862184" observedRunningTime="2025-11-22 00:19:27.661630071 +0000 UTC m=+802.949661262" watchObservedRunningTime="2025-11-22 00:19:33.350013869 +0000 UTC m=+808.638045100" Nov 22 00:19:33 crc kubenswrapper[4928]: I1122 00:19:33.482695 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.153272 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.154942 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.157943 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-sys-config" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.158173 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-ca" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.158403 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-global-ca" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.171124 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.325828 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/fe966661-429e-44de-8e6a-7ff2aa20dc93-builder-dockercfg-zp4ph-push\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.325894 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/fe966661-429e-44de-8e6a-7ff2aa20dc93-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.326015 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/fe966661-429e-44de-8e6a-7ff2aa20dc93-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.326073 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/fe966661-429e-44de-8e6a-7ff2aa20dc93-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.326112 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/fe966661-429e-44de-8e6a-7ff2aa20dc93-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.326135 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fe966661-429e-44de-8e6a-7ff2aa20dc93-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.326163 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpxrz\" (UniqueName: \"kubernetes.io/projected/fe966661-429e-44de-8e6a-7ff2aa20dc93-kube-api-access-mpxrz\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 
00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.326204 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/fe966661-429e-44de-8e6a-7ff2aa20dc93-builder-dockercfg-zp4ph-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.326255 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe966661-429e-44de-8e6a-7ff2aa20dc93-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.326322 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/fe966661-429e-44de-8e6a-7ff2aa20dc93-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.326358 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe966661-429e-44de-8e6a-7ff2aa20dc93-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.326393 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/fe966661-429e-44de-8e6a-7ff2aa20dc93-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.427380 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/fe966661-429e-44de-8e6a-7ff2aa20dc93-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.427434 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/fe966661-429e-44de-8e6a-7ff2aa20dc93-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.427457 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fe966661-429e-44de-8e6a-7ff2aa20dc93-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.427481 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpxrz\" (UniqueName: \"kubernetes.io/projected/fe966661-429e-44de-8e6a-7ff2aa20dc93-kube-api-access-mpxrz\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.427502 4928 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/fe966661-429e-44de-8e6a-7ff2aa20dc93-builder-dockercfg-zp4ph-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.427536 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe966661-429e-44de-8e6a-7ff2aa20dc93-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.427637 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fe966661-429e-44de-8e6a-7ff2aa20dc93-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.427574 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/fe966661-429e-44de-8e6a-7ff2aa20dc93-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.428224 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/fe966661-429e-44de-8e6a-7ff2aa20dc93-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " 
pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.428440 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/fe966661-429e-44de-8e6a-7ff2aa20dc93-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.428681 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe966661-429e-44de-8e6a-7ff2aa20dc93-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.428688 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/fe966661-429e-44de-8e6a-7ff2aa20dc93-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.428750 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe966661-429e-44de-8e6a-7ff2aa20dc93-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.428785 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/fe966661-429e-44de-8e6a-7ff2aa20dc93-buildcachedir\") pod \"service-telemetry-operator-2-build\" 
(UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.428859 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/fe966661-429e-44de-8e6a-7ff2aa20dc93-builder-dockercfg-zp4ph-push\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.428863 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/fe966661-429e-44de-8e6a-7ff2aa20dc93-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.428879 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/fe966661-429e-44de-8e6a-7ff2aa20dc93-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.428939 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/fe966661-429e-44de-8e6a-7ff2aa20dc93-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.429071 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/fe966661-429e-44de-8e6a-7ff2aa20dc93-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.429293 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/fe966661-429e-44de-8e6a-7ff2aa20dc93-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.429425 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/fe966661-429e-44de-8e6a-7ff2aa20dc93-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.435284 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/fe966661-429e-44de-8e6a-7ff2aa20dc93-builder-dockercfg-zp4ph-push\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.436327 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/fe966661-429e-44de-8e6a-7ff2aa20dc93-builder-dockercfg-zp4ph-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.445953 4928 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpxrz\" (UniqueName: \"kubernetes.io/projected/fe966661-429e-44de-8e6a-7ff2aa20dc93-kube-api-access-mpxrz\") pod \"service-telemetry-operator-2-build\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.488281 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.990891 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-ft6w9"] Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.991858 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-ft6w9" Nov 22 00:19:35 crc kubenswrapper[4928]: I1122 00:19:35.994538 4928 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-t4vqm" Nov 22 00:19:36 crc kubenswrapper[4928]: I1122 00:19:36.010457 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-ft6w9"] Nov 22 00:19:36 crc kubenswrapper[4928]: I1122 00:19:36.135943 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48v5n\" (UniqueName: \"kubernetes.io/projected/0042c8ac-21bd-4461-b86d-2d35d8135b42-kube-api-access-48v5n\") pod \"cert-manager-86cb77c54b-ft6w9\" (UID: \"0042c8ac-21bd-4461-b86d-2d35d8135b42\") " pod="cert-manager/cert-manager-86cb77c54b-ft6w9" Nov 22 00:19:36 crc kubenswrapper[4928]: I1122 00:19:36.136014 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0042c8ac-21bd-4461-b86d-2d35d8135b42-bound-sa-token\") pod \"cert-manager-86cb77c54b-ft6w9\" (UID: 
\"0042c8ac-21bd-4461-b86d-2d35d8135b42\") " pod="cert-manager/cert-manager-86cb77c54b-ft6w9" Nov 22 00:19:36 crc kubenswrapper[4928]: I1122 00:19:36.237666 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0042c8ac-21bd-4461-b86d-2d35d8135b42-bound-sa-token\") pod \"cert-manager-86cb77c54b-ft6w9\" (UID: \"0042c8ac-21bd-4461-b86d-2d35d8135b42\") " pod="cert-manager/cert-manager-86cb77c54b-ft6w9" Nov 22 00:19:36 crc kubenswrapper[4928]: I1122 00:19:36.238073 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48v5n\" (UniqueName: \"kubernetes.io/projected/0042c8ac-21bd-4461-b86d-2d35d8135b42-kube-api-access-48v5n\") pod \"cert-manager-86cb77c54b-ft6w9\" (UID: \"0042c8ac-21bd-4461-b86d-2d35d8135b42\") " pod="cert-manager/cert-manager-86cb77c54b-ft6w9" Nov 22 00:19:36 crc kubenswrapper[4928]: I1122 00:19:36.253738 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48v5n\" (UniqueName: \"kubernetes.io/projected/0042c8ac-21bd-4461-b86d-2d35d8135b42-kube-api-access-48v5n\") pod \"cert-manager-86cb77c54b-ft6w9\" (UID: \"0042c8ac-21bd-4461-b86d-2d35d8135b42\") " pod="cert-manager/cert-manager-86cb77c54b-ft6w9" Nov 22 00:19:36 crc kubenswrapper[4928]: I1122 00:19:36.254647 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0042c8ac-21bd-4461-b86d-2d35d8135b42-bound-sa-token\") pod \"cert-manager-86cb77c54b-ft6w9\" (UID: \"0042c8ac-21bd-4461-b86d-2d35d8135b42\") " pod="cert-manager/cert-manager-86cb77c54b-ft6w9" Nov 22 00:19:36 crc kubenswrapper[4928]: I1122 00:19:36.309423 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-ft6w9" Nov 22 00:19:36 crc kubenswrapper[4928]: I1122 00:19:36.707160 4928 patch_prober.go:28] interesting pod/machine-config-daemon-hp7cp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 00:19:36 crc kubenswrapper[4928]: I1122 00:19:36.707237 4928 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 00:19:36 crc kubenswrapper[4928]: I1122 00:19:36.707294 4928 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" Nov 22 00:19:36 crc kubenswrapper[4928]: I1122 00:19:36.708210 4928 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"32211f58683787fb757bd9a4fc769db246d8d45c2afe49257b9494ed6fed87fd"} pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 00:19:36 crc kubenswrapper[4928]: I1122 00:19:36.708312 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" containerID="cri-o://32211f58683787fb757bd9a4fc769db246d8d45c2afe49257b9494ed6fed87fd" gracePeriod=600 Nov 22 00:19:36 crc kubenswrapper[4928]: I1122 00:19:36.885012 4928 generic.go:334] "Generic (PLEG): container finished" 
podID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerID="32211f58683787fb757bd9a4fc769db246d8d45c2afe49257b9494ed6fed87fd" exitCode=0 Nov 22 00:19:36 crc kubenswrapper[4928]: I1122 00:19:36.885059 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" event={"ID":"897ce64d-0600-4c6a-b684-0c1683fa14bc","Type":"ContainerDied","Data":"32211f58683787fb757bd9a4fc769db246d8d45c2afe49257b9494ed6fed87fd"} Nov 22 00:19:36 crc kubenswrapper[4928]: I1122 00:19:36.885095 4928 scope.go:117] "RemoveContainer" containerID="90a2ab314eb110db4bd0a1773ada11cfb3889716012f40d2cc439313ab3736d0" Nov 22 00:19:39 crc kubenswrapper[4928]: I1122 00:19:39.013698 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Nov 22 00:19:39 crc kubenswrapper[4928]: W1122 00:19:39.017813 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe966661_429e_44de_8e6a_7ff2aa20dc93.slice/crio-699f50f670d2b990157ee3f1cc38a1c4fd883a4165a9c56266d28bc2eac826bc WatchSource:0}: Error finding container 699f50f670d2b990157ee3f1cc38a1c4fd883a4165a9c56266d28bc2eac826bc: Status 404 returned error can't find the container with id 699f50f670d2b990157ee3f1cc38a1c4fd883a4165a9c56266d28bc2eac826bc Nov 22 00:19:39 crc kubenswrapper[4928]: I1122 00:19:39.059979 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-ft6w9"] Nov 22 00:19:39 crc kubenswrapper[4928]: I1122 00:19:39.716130 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gh4xn"] Nov 22 00:19:39 crc kubenswrapper[4928]: I1122 00:19:39.718953 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gh4xn" Nov 22 00:19:39 crc kubenswrapper[4928]: I1122 00:19:39.726326 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gh4xn"] Nov 22 00:19:39 crc kubenswrapper[4928]: I1122 00:19:39.787665 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4678775e-476c-4b4c-8832-0715d40f3c13-utilities\") pod \"community-operators-gh4xn\" (UID: \"4678775e-476c-4b4c-8832-0715d40f3c13\") " pod="openshift-marketplace/community-operators-gh4xn" Nov 22 00:19:39 crc kubenswrapper[4928]: I1122 00:19:39.787758 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4678775e-476c-4b4c-8832-0715d40f3c13-catalog-content\") pod \"community-operators-gh4xn\" (UID: \"4678775e-476c-4b4c-8832-0715d40f3c13\") " pod="openshift-marketplace/community-operators-gh4xn" Nov 22 00:19:39 crc kubenswrapper[4928]: I1122 00:19:39.787780 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlgp4\" (UniqueName: \"kubernetes.io/projected/4678775e-476c-4b4c-8832-0715d40f3c13-kube-api-access-nlgp4\") pod \"community-operators-gh4xn\" (UID: \"4678775e-476c-4b4c-8832-0715d40f3c13\") " pod="openshift-marketplace/community-operators-gh4xn" Nov 22 00:19:39 crc kubenswrapper[4928]: I1122 00:19:39.888685 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4678775e-476c-4b4c-8832-0715d40f3c13-utilities\") pod \"community-operators-gh4xn\" (UID: \"4678775e-476c-4b4c-8832-0715d40f3c13\") " pod="openshift-marketplace/community-operators-gh4xn" Nov 22 00:19:39 crc kubenswrapper[4928]: I1122 00:19:39.888882 4928 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4678775e-476c-4b4c-8832-0715d40f3c13-catalog-content\") pod \"community-operators-gh4xn\" (UID: \"4678775e-476c-4b4c-8832-0715d40f3c13\") " pod="openshift-marketplace/community-operators-gh4xn" Nov 22 00:19:39 crc kubenswrapper[4928]: I1122 00:19:39.888924 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlgp4\" (UniqueName: \"kubernetes.io/projected/4678775e-476c-4b4c-8832-0715d40f3c13-kube-api-access-nlgp4\") pod \"community-operators-gh4xn\" (UID: \"4678775e-476c-4b4c-8832-0715d40f3c13\") " pod="openshift-marketplace/community-operators-gh4xn" Nov 22 00:19:39 crc kubenswrapper[4928]: I1122 00:19:39.889705 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4678775e-476c-4b4c-8832-0715d40f3c13-utilities\") pod \"community-operators-gh4xn\" (UID: \"4678775e-476c-4b4c-8832-0715d40f3c13\") " pod="openshift-marketplace/community-operators-gh4xn" Nov 22 00:19:39 crc kubenswrapper[4928]: I1122 00:19:39.889728 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4678775e-476c-4b4c-8832-0715d40f3c13-catalog-content\") pod \"community-operators-gh4xn\" (UID: \"4678775e-476c-4b4c-8832-0715d40f3c13\") " pod="openshift-marketplace/community-operators-gh4xn" Nov 22 00:19:39 crc kubenswrapper[4928]: I1122 00:19:39.902451 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"611dc6e0-3341-4770-ba1d-9f8e626fb414","Type":"ContainerStarted","Data":"07859dfc9079dd09e28ded48a9a0fa6a83a9902160f5167e51885ba20f2fba83"} Nov 22 00:19:39 crc kubenswrapper[4928]: I1122 00:19:39.904003 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" 
event={"ID":"fe966661-429e-44de-8e6a-7ff2aa20dc93","Type":"ContainerStarted","Data":"793c297ac195aba5069366a9fa358b1df98d610330821e63db8d8a42940a0b71"} Nov 22 00:19:39 crc kubenswrapper[4928]: I1122 00:19:39.904058 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"fe966661-429e-44de-8e6a-7ff2aa20dc93","Type":"ContainerStarted","Data":"699f50f670d2b990157ee3f1cc38a1c4fd883a4165a9c56266d28bc2eac826bc"} Nov 22 00:19:39 crc kubenswrapper[4928]: I1122 00:19:39.905908 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" event={"ID":"897ce64d-0600-4c6a-b684-0c1683fa14bc","Type":"ContainerStarted","Data":"ae127a951fcea164fc2fff2b44252ffdfd5172caf45047681eadba5c13f25fb2"} Nov 22 00:19:39 crc kubenswrapper[4928]: I1122 00:19:39.907470 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"01b3dffb-f75f-4574-9bc6-973050fceb66","Type":"ContainerStarted","Data":"d25fc4d5adfe1700ef978be48468f405361048004b8600be54dad41bd92c4648"} Nov 22 00:19:39 crc kubenswrapper[4928]: I1122 00:19:39.907581 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="01b3dffb-f75f-4574-9bc6-973050fceb66" containerName="manage-dockerfile" containerID="cri-o://d25fc4d5adfe1700ef978be48468f405361048004b8600be54dad41bd92c4648" gracePeriod=30 Nov 22 00:19:39 crc kubenswrapper[4928]: I1122 00:19:39.909416 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-ft6w9" event={"ID":"0042c8ac-21bd-4461-b86d-2d35d8135b42","Type":"ContainerStarted","Data":"c63af20fa2de396ed2cd4ee652d4398c6940eb82415b34bfd852b2c741b8f7a9"} Nov 22 00:19:39 crc kubenswrapper[4928]: I1122 00:19:39.909456 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-86cb77c54b-ft6w9" event={"ID":"0042c8ac-21bd-4461-b86d-2d35d8135b42","Type":"ContainerStarted","Data":"dc88d977ebaa199f221e8d7809312ee18e1f733be6005cb8fdb2d17d46280a67"} Nov 22 00:19:39 crc kubenswrapper[4928]: I1122 00:19:39.921719 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlgp4\" (UniqueName: \"kubernetes.io/projected/4678775e-476c-4b4c-8832-0715d40f3c13-kube-api-access-nlgp4\") pod \"community-operators-gh4xn\" (UID: \"4678775e-476c-4b4c-8832-0715d40f3c13\") " pod="openshift-marketplace/community-operators-gh4xn" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.021311 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-ft6w9" podStartSLOduration=5.021288305 podStartE2EDuration="5.021288305s" podCreationTimestamp="2025-11-22 00:19:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:19:40.011466579 +0000 UTC m=+815.299497790" watchObservedRunningTime="2025-11-22 00:19:40.021288305 +0000 UTC m=+815.309319496" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.087414 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gh4xn" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.444124 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_01b3dffb-f75f-4574-9bc6-973050fceb66/manage-dockerfile/0.log" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.444455 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.500467 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/01b3dffb-f75f-4574-9bc6-973050fceb66-container-storage-run\") pod \"01b3dffb-f75f-4574-9bc6-973050fceb66\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.500809 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01b3dffb-f75f-4574-9bc6-973050fceb66-build-proxy-ca-bundles\") pod \"01b3dffb-f75f-4574-9bc6-973050fceb66\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.500830 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/01b3dffb-f75f-4574-9bc6-973050fceb66-build-system-configs\") pod \"01b3dffb-f75f-4574-9bc6-973050fceb66\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.500875 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/01b3dffb-f75f-4574-9bc6-973050fceb66-buildcachedir\") pod \"01b3dffb-f75f-4574-9bc6-973050fceb66\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.500897 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/01b3dffb-f75f-4574-9bc6-973050fceb66-build-blob-cache\") pod \"01b3dffb-f75f-4574-9bc6-973050fceb66\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.500938 4928 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01b3dffb-f75f-4574-9bc6-973050fceb66-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "01b3dffb-f75f-4574-9bc6-973050fceb66" (UID: "01b3dffb-f75f-4574-9bc6-973050fceb66"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.500967 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01b3dffb-f75f-4574-9bc6-973050fceb66-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "01b3dffb-f75f-4574-9bc6-973050fceb66" (UID: "01b3dffb-f75f-4574-9bc6-973050fceb66"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.501027 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/01b3dffb-f75f-4574-9bc6-973050fceb66-node-pullsecrets\") pod \"01b3dffb-f75f-4574-9bc6-973050fceb66\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.501070 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01b3dffb-f75f-4574-9bc6-973050fceb66-build-ca-bundles\") pod \"01b3dffb-f75f-4574-9bc6-973050fceb66\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.501126 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01b3dffb-f75f-4574-9bc6-973050fceb66-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "01b3dffb-f75f-4574-9bc6-973050fceb66" (UID: "01b3dffb-f75f-4574-9bc6-973050fceb66"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.501302 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01b3dffb-f75f-4574-9bc6-973050fceb66-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "01b3dffb-f75f-4574-9bc6-973050fceb66" (UID: "01b3dffb-f75f-4574-9bc6-973050fceb66"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.501571 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01b3dffb-f75f-4574-9bc6-973050fceb66-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "01b3dffb-f75f-4574-9bc6-973050fceb66" (UID: "01b3dffb-f75f-4574-9bc6-973050fceb66"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.501641 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01b3dffb-f75f-4574-9bc6-973050fceb66-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "01b3dffb-f75f-4574-9bc6-973050fceb66" (UID: "01b3dffb-f75f-4574-9bc6-973050fceb66"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.501755 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01b3dffb-f75f-4574-9bc6-973050fceb66-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "01b3dffb-f75f-4574-9bc6-973050fceb66" (UID: "01b3dffb-f75f-4574-9bc6-973050fceb66"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.501855 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/01b3dffb-f75f-4574-9bc6-973050fceb66-builder-dockercfg-zp4ph-pull\") pod \"01b3dffb-f75f-4574-9bc6-973050fceb66\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.501876 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/01b3dffb-f75f-4574-9bc6-973050fceb66-buildworkdir\") pod \"01b3dffb-f75f-4574-9bc6-973050fceb66\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.502506 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01b3dffb-f75f-4574-9bc6-973050fceb66-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "01b3dffb-f75f-4574-9bc6-973050fceb66" (UID: "01b3dffb-f75f-4574-9bc6-973050fceb66"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.503095 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smt88\" (UniqueName: \"kubernetes.io/projected/01b3dffb-f75f-4574-9bc6-973050fceb66-kube-api-access-smt88\") pod \"01b3dffb-f75f-4574-9bc6-973050fceb66\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.503139 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/01b3dffb-f75f-4574-9bc6-973050fceb66-builder-dockercfg-zp4ph-push\") pod \"01b3dffb-f75f-4574-9bc6-973050fceb66\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.503154 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/01b3dffb-f75f-4574-9bc6-973050fceb66-container-storage-root\") pod \"01b3dffb-f75f-4574-9bc6-973050fceb66\" (UID: \"01b3dffb-f75f-4574-9bc6-973050fceb66\") " Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.503498 4928 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/01b3dffb-f75f-4574-9bc6-973050fceb66-buildworkdir\") on node \"crc\" DevicePath \"\"" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.503519 4928 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/01b3dffb-f75f-4574-9bc6-973050fceb66-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.503531 4928 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01b3dffb-f75f-4574-9bc6-973050fceb66-build-proxy-ca-bundles\") on node \"crc\" DevicePath 
\"\"" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.503542 4928 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/01b3dffb-f75f-4574-9bc6-973050fceb66-build-system-configs\") on node \"crc\" DevicePath \"\"" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.503553 4928 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/01b3dffb-f75f-4574-9bc6-973050fceb66-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.503565 4928 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/01b3dffb-f75f-4574-9bc6-973050fceb66-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.503576 4928 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/01b3dffb-f75f-4574-9bc6-973050fceb66-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.503586 4928 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01b3dffb-f75f-4574-9bc6-973050fceb66-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.503583 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01b3dffb-f75f-4574-9bc6-973050fceb66-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "01b3dffb-f75f-4574-9bc6-973050fceb66" (UID: "01b3dffb-f75f-4574-9bc6-973050fceb66"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.506815 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01b3dffb-f75f-4574-9bc6-973050fceb66-builder-dockercfg-zp4ph-push" (OuterVolumeSpecName: "builder-dockercfg-zp4ph-push") pod "01b3dffb-f75f-4574-9bc6-973050fceb66" (UID: "01b3dffb-f75f-4574-9bc6-973050fceb66"). InnerVolumeSpecName "builder-dockercfg-zp4ph-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.507012 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01b3dffb-f75f-4574-9bc6-973050fceb66-builder-dockercfg-zp4ph-pull" (OuterVolumeSpecName: "builder-dockercfg-zp4ph-pull") pod "01b3dffb-f75f-4574-9bc6-973050fceb66" (UID: "01b3dffb-f75f-4574-9bc6-973050fceb66"). InnerVolumeSpecName "builder-dockercfg-zp4ph-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.507270 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01b3dffb-f75f-4574-9bc6-973050fceb66-kube-api-access-smt88" (OuterVolumeSpecName: "kube-api-access-smt88") pod "01b3dffb-f75f-4574-9bc6-973050fceb66" (UID: "01b3dffb-f75f-4574-9bc6-973050fceb66"). InnerVolumeSpecName "kube-api-access-smt88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.604218 4928 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/01b3dffb-f75f-4574-9bc6-973050fceb66-builder-dockercfg-zp4ph-pull\") on node \"crc\" DevicePath \"\"" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.604248 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smt88\" (UniqueName: \"kubernetes.io/projected/01b3dffb-f75f-4574-9bc6-973050fceb66-kube-api-access-smt88\") on node \"crc\" DevicePath \"\"" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.604262 4928 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/01b3dffb-f75f-4574-9bc6-973050fceb66-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.604276 4928 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/01b3dffb-f75f-4574-9bc6-973050fceb66-builder-dockercfg-zp4ph-push\") on node \"crc\" DevicePath \"\"" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.654292 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gh4xn"] Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.918173 4928 generic.go:334] "Generic (PLEG): container finished" podID="611dc6e0-3341-4770-ba1d-9f8e626fb414" containerID="07859dfc9079dd09e28ded48a9a0fa6a83a9902160f5167e51885ba20f2fba83" exitCode=0 Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.918230 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"611dc6e0-3341-4770-ba1d-9f8e626fb414","Type":"ContainerDied","Data":"07859dfc9079dd09e28ded48a9a0fa6a83a9902160f5167e51885ba20f2fba83"} Nov 22 00:19:40 crc kubenswrapper[4928]: 
I1122 00:19:40.920648 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_01b3dffb-f75f-4574-9bc6-973050fceb66/manage-dockerfile/0.log" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.920702 4928 generic.go:334] "Generic (PLEG): container finished" podID="01b3dffb-f75f-4574-9bc6-973050fceb66" containerID="d25fc4d5adfe1700ef978be48468f405361048004b8600be54dad41bd92c4648" exitCode=1 Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.920757 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.920770 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"01b3dffb-f75f-4574-9bc6-973050fceb66","Type":"ContainerDied","Data":"d25fc4d5adfe1700ef978be48468f405361048004b8600be54dad41bd92c4648"} Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.920845 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"01b3dffb-f75f-4574-9bc6-973050fceb66","Type":"ContainerDied","Data":"7e7c9bc41b5d99b0f0e9226686285c3c9e03901de4c21ecb4524a647dc99f3da"} Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.920868 4928 scope.go:117] "RemoveContainer" containerID="d25fc4d5adfe1700ef978be48468f405361048004b8600be54dad41bd92c4648" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.925276 4928 generic.go:334] "Generic (PLEG): container finished" podID="4678775e-476c-4b4c-8832-0715d40f3c13" containerID="a9a7a05dc57fbfa6dc2ca4979db77bed1da1692247106539af0b6d6346165bc7" exitCode=0 Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.925386 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gh4xn" 
event={"ID":"4678775e-476c-4b4c-8832-0715d40f3c13","Type":"ContainerDied","Data":"a9a7a05dc57fbfa6dc2ca4979db77bed1da1692247106539af0b6d6346165bc7"} Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.925440 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gh4xn" event={"ID":"4678775e-476c-4b4c-8832-0715d40f3c13","Type":"ContainerStarted","Data":"f470758d0b989a0359cc2492cf41de89fae94614d9e2de2c6ffce909f4fbd9fe"} Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.940152 4928 scope.go:117] "RemoveContainer" containerID="d25fc4d5adfe1700ef978be48468f405361048004b8600be54dad41bd92c4648" Nov 22 00:19:40 crc kubenswrapper[4928]: E1122 00:19:40.940945 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d25fc4d5adfe1700ef978be48468f405361048004b8600be54dad41bd92c4648\": container with ID starting with d25fc4d5adfe1700ef978be48468f405361048004b8600be54dad41bd92c4648 not found: ID does not exist" containerID="d25fc4d5adfe1700ef978be48468f405361048004b8600be54dad41bd92c4648" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.940982 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d25fc4d5adfe1700ef978be48468f405361048004b8600be54dad41bd92c4648"} err="failed to get container status \"d25fc4d5adfe1700ef978be48468f405361048004b8600be54dad41bd92c4648\": rpc error: code = NotFound desc = could not find container \"d25fc4d5adfe1700ef978be48468f405361048004b8600be54dad41bd92c4648\": container with ID starting with d25fc4d5adfe1700ef978be48468f405361048004b8600be54dad41bd92c4648 not found: ID does not exist" Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.964124 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Nov 22 00:19:40 crc kubenswrapper[4928]: I1122 00:19:40.976658 4928 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Nov 22 00:19:41 crc kubenswrapper[4928]: I1122 00:19:41.628664 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01b3dffb-f75f-4574-9bc6-973050fceb66" path="/var/lib/kubelet/pods/01b3dffb-f75f-4574-9bc6-973050fceb66/volumes" Nov 22 00:19:41 crc kubenswrapper[4928]: I1122 00:19:41.935415 4928 generic.go:334] "Generic (PLEG): container finished" podID="611dc6e0-3341-4770-ba1d-9f8e626fb414" containerID="e9756605dbaf3758540f9f3afe2fb42cf90dd0dbc5f9693dce9ada1e12019e1f" exitCode=0 Nov 22 00:19:41 crc kubenswrapper[4928]: I1122 00:19:41.935812 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"611dc6e0-3341-4770-ba1d-9f8e626fb414","Type":"ContainerDied","Data":"e9756605dbaf3758540f9f3afe2fb42cf90dd0dbc5f9693dce9ada1e12019e1f"} Nov 22 00:19:42 crc kubenswrapper[4928]: I1122 00:19:42.947707 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"611dc6e0-3341-4770-ba1d-9f8e626fb414","Type":"ContainerStarted","Data":"6f2b92c31a3ea6ceb18c185fdb17c4bb13888db7cdd9b51c0ae2ad1f207e9560"} Nov 22 00:19:42 crc kubenswrapper[4928]: I1122 00:19:42.947916 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:19:42 crc kubenswrapper[4928]: I1122 00:19:42.949480 4928 generic.go:334] "Generic (PLEG): container finished" podID="4678775e-476c-4b4c-8832-0715d40f3c13" containerID="e511a8758fedd800752b1331faa35f3f6d2097e6bd6b246900f7f3d11926b512" exitCode=0 Nov 22 00:19:42 crc kubenswrapper[4928]: I1122 00:19:42.949528 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gh4xn" event={"ID":"4678775e-476c-4b4c-8832-0715d40f3c13","Type":"ContainerDied","Data":"e511a8758fedd800752b1331faa35f3f6d2097e6bd6b246900f7f3d11926b512"} Nov 22 00:19:42 crc 
kubenswrapper[4928]: I1122 00:19:42.983357 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=6.636485578 podStartE2EDuration="49.983337195s" podCreationTimestamp="2025-11-22 00:18:53 +0000 UTC" firstStartedPulling="2025-11-22 00:18:55.216989417 +0000 UTC m=+770.505020608" lastFinishedPulling="2025-11-22 00:19:38.563841034 +0000 UTC m=+813.851872225" observedRunningTime="2025-11-22 00:19:42.977480497 +0000 UTC m=+818.265511688" watchObservedRunningTime="2025-11-22 00:19:42.983337195 +0000 UTC m=+818.271368396" Nov 22 00:19:43 crc kubenswrapper[4928]: I1122 00:19:43.958034 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gh4xn" event={"ID":"4678775e-476c-4b4c-8832-0715d40f3c13","Type":"ContainerStarted","Data":"00d3a8326144fc506e57213594c8345bb769547066a3d521962cf36f185d04ca"} Nov 22 00:19:43 crc kubenswrapper[4928]: I1122 00:19:43.981279 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gh4xn" podStartSLOduration=2.295040713 podStartE2EDuration="4.981259363s" podCreationTimestamp="2025-11-22 00:19:39 +0000 UTC" firstStartedPulling="2025-11-22 00:19:40.926939813 +0000 UTC m=+816.214971014" lastFinishedPulling="2025-11-22 00:19:43.613158473 +0000 UTC m=+818.901189664" observedRunningTime="2025-11-22 00:19:43.974506769 +0000 UTC m=+819.262537960" watchObservedRunningTime="2025-11-22 00:19:43.981259363 +0000 UTC m=+819.269290554" Nov 22 00:19:46 crc kubenswrapper[4928]: I1122 00:19:46.982657 4928 generic.go:334] "Generic (PLEG): container finished" podID="fe966661-429e-44de-8e6a-7ff2aa20dc93" containerID="793c297ac195aba5069366a9fa358b1df98d610330821e63db8d8a42940a0b71" exitCode=0 Nov 22 00:19:46 crc kubenswrapper[4928]: I1122 00:19:46.982897 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" 
event={"ID":"fe966661-429e-44de-8e6a-7ff2aa20dc93","Type":"ContainerDied","Data":"793c297ac195aba5069366a9fa358b1df98d610330821e63db8d8a42940a0b71"} Nov 22 00:19:50 crc kubenswrapper[4928]: I1122 00:19:50.088378 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gh4xn" Nov 22 00:19:50 crc kubenswrapper[4928]: I1122 00:19:50.089405 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gh4xn" Nov 22 00:19:50 crc kubenswrapper[4928]: I1122 00:19:50.131781 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gh4xn" Nov 22 00:19:51 crc kubenswrapper[4928]: I1122 00:19:51.044640 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gh4xn" Nov 22 00:19:51 crc kubenswrapper[4928]: I1122 00:19:51.087923 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gh4xn"] Nov 22 00:19:52 crc kubenswrapper[4928]: I1122 00:19:52.011621 4928 generic.go:334] "Generic (PLEG): container finished" podID="fe966661-429e-44de-8e6a-7ff2aa20dc93" containerID="d25819e37e8b630731225189b7b0e029a3e4b110ba9d114e21e04d5a7845369c" exitCode=0 Nov 22 00:19:52 crc kubenswrapper[4928]: I1122 00:19:52.011714 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"fe966661-429e-44de-8e6a-7ff2aa20dc93","Type":"ContainerDied","Data":"d25819e37e8b630731225189b7b0e029a3e4b110ba9d114e21e04d5a7845369c"} Nov 22 00:19:52 crc kubenswrapper[4928]: I1122 00:19:52.047074 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_fe966661-429e-44de-8e6a-7ff2aa20dc93/manage-dockerfile/0.log" Nov 22 00:19:53 crc kubenswrapper[4928]: I1122 00:19:53.022151 4928 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"fe966661-429e-44de-8e6a-7ff2aa20dc93","Type":"ContainerStarted","Data":"eff25d78d96fbd319bbbea45100a39b8018bfb35e1577a5c6931fb3f551f4caa"} Nov 22 00:19:53 crc kubenswrapper[4928]: I1122 00:19:53.022304 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gh4xn" podUID="4678775e-476c-4b4c-8832-0715d40f3c13" containerName="registry-server" containerID="cri-o://00d3a8326144fc506e57213594c8345bb769547066a3d521962cf36f185d04ca" gracePeriod=2 Nov 22 00:19:53 crc kubenswrapper[4928]: I1122 00:19:53.057452 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-2-build" podStartSLOduration=18.05743037 podStartE2EDuration="18.05743037s" podCreationTimestamp="2025-11-22 00:19:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:19:53.051808907 +0000 UTC m=+828.339840098" watchObservedRunningTime="2025-11-22 00:19:53.05743037 +0000 UTC m=+828.345461561" Nov 22 00:19:53 crc kubenswrapper[4928]: I1122 00:19:53.675370 4928 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="611dc6e0-3341-4770-ba1d-9f8e626fb414" containerName="elasticsearch" probeResult="failure" output=< Nov 22 00:19:53 crc kubenswrapper[4928]: {"timestamp": "2025-11-22T00:19:53+00:00", "message": "readiness probe failed", "curl_rc": "7"} Nov 22 00:19:53 crc kubenswrapper[4928]: > Nov 22 00:19:57 crc kubenswrapper[4928]: I1122 00:19:57.067207 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gh4xn_4678775e-476c-4b4c-8832-0715d40f3c13/registry-server/0.log" Nov 22 00:19:57 crc kubenswrapper[4928]: I1122 00:19:57.069147 4928 generic.go:334] "Generic (PLEG): container 
finished" podID="4678775e-476c-4b4c-8832-0715d40f3c13" containerID="00d3a8326144fc506e57213594c8345bb769547066a3d521962cf36f185d04ca" exitCode=137 Nov 22 00:19:57 crc kubenswrapper[4928]: I1122 00:19:57.069254 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gh4xn" event={"ID":"4678775e-476c-4b4c-8832-0715d40f3c13","Type":"ContainerDied","Data":"00d3a8326144fc506e57213594c8345bb769547066a3d521962cf36f185d04ca"} Nov 22 00:19:58 crc kubenswrapper[4928]: I1122 00:19:58.627384 4928 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="611dc6e0-3341-4770-ba1d-9f8e626fb414" containerName="elasticsearch" probeResult="failure" output=< Nov 22 00:19:58 crc kubenswrapper[4928]: {"timestamp": "2025-11-22T00:19:58+00:00", "message": "readiness probe failed", "curl_rc": "7"} Nov 22 00:19:58 crc kubenswrapper[4928]: > Nov 22 00:19:59 crc kubenswrapper[4928]: I1122 00:19:59.104001 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jwqt5"] Nov 22 00:19:59 crc kubenswrapper[4928]: E1122 00:19:59.104281 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01b3dffb-f75f-4574-9bc6-973050fceb66" containerName="manage-dockerfile" Nov 22 00:19:59 crc kubenswrapper[4928]: I1122 00:19:59.104295 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="01b3dffb-f75f-4574-9bc6-973050fceb66" containerName="manage-dockerfile" Nov 22 00:19:59 crc kubenswrapper[4928]: I1122 00:19:59.104403 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="01b3dffb-f75f-4574-9bc6-973050fceb66" containerName="manage-dockerfile" Nov 22 00:19:59 crc kubenswrapper[4928]: I1122 00:19:59.105423 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jwqt5" Nov 22 00:19:59 crc kubenswrapper[4928]: I1122 00:19:59.115148 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jwqt5"] Nov 22 00:19:59 crc kubenswrapper[4928]: I1122 00:19:59.150158 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mvcl\" (UniqueName: \"kubernetes.io/projected/ef029927-44f9-4500-a0f1-8b6215da9736-kube-api-access-7mvcl\") pod \"certified-operators-jwqt5\" (UID: \"ef029927-44f9-4500-a0f1-8b6215da9736\") " pod="openshift-marketplace/certified-operators-jwqt5" Nov 22 00:19:59 crc kubenswrapper[4928]: I1122 00:19:59.150302 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef029927-44f9-4500-a0f1-8b6215da9736-utilities\") pod \"certified-operators-jwqt5\" (UID: \"ef029927-44f9-4500-a0f1-8b6215da9736\") " pod="openshift-marketplace/certified-operators-jwqt5" Nov 22 00:19:59 crc kubenswrapper[4928]: I1122 00:19:59.150341 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef029927-44f9-4500-a0f1-8b6215da9736-catalog-content\") pod \"certified-operators-jwqt5\" (UID: \"ef029927-44f9-4500-a0f1-8b6215da9736\") " pod="openshift-marketplace/certified-operators-jwqt5" Nov 22 00:19:59 crc kubenswrapper[4928]: I1122 00:19:59.251715 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mvcl\" (UniqueName: \"kubernetes.io/projected/ef029927-44f9-4500-a0f1-8b6215da9736-kube-api-access-7mvcl\") pod \"certified-operators-jwqt5\" (UID: \"ef029927-44f9-4500-a0f1-8b6215da9736\") " pod="openshift-marketplace/certified-operators-jwqt5" Nov 22 00:19:59 crc kubenswrapper[4928]: I1122 00:19:59.252037 4928 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef029927-44f9-4500-a0f1-8b6215da9736-utilities\") pod \"certified-operators-jwqt5\" (UID: \"ef029927-44f9-4500-a0f1-8b6215da9736\") " pod="openshift-marketplace/certified-operators-jwqt5" Nov 22 00:19:59 crc kubenswrapper[4928]: I1122 00:19:59.252137 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef029927-44f9-4500-a0f1-8b6215da9736-catalog-content\") pod \"certified-operators-jwqt5\" (UID: \"ef029927-44f9-4500-a0f1-8b6215da9736\") " pod="openshift-marketplace/certified-operators-jwqt5" Nov 22 00:19:59 crc kubenswrapper[4928]: I1122 00:19:59.252528 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef029927-44f9-4500-a0f1-8b6215da9736-utilities\") pod \"certified-operators-jwqt5\" (UID: \"ef029927-44f9-4500-a0f1-8b6215da9736\") " pod="openshift-marketplace/certified-operators-jwqt5" Nov 22 00:19:59 crc kubenswrapper[4928]: I1122 00:19:59.252554 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef029927-44f9-4500-a0f1-8b6215da9736-catalog-content\") pod \"certified-operators-jwqt5\" (UID: \"ef029927-44f9-4500-a0f1-8b6215da9736\") " pod="openshift-marketplace/certified-operators-jwqt5" Nov 22 00:19:59 crc kubenswrapper[4928]: I1122 00:19:59.271208 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mvcl\" (UniqueName: \"kubernetes.io/projected/ef029927-44f9-4500-a0f1-8b6215da9736-kube-api-access-7mvcl\") pod \"certified-operators-jwqt5\" (UID: \"ef029927-44f9-4500-a0f1-8b6215da9736\") " pod="openshift-marketplace/certified-operators-jwqt5" Nov 22 00:19:59 crc kubenswrapper[4928]: I1122 00:19:59.421604 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jwqt5" Nov 22 00:20:00 crc kubenswrapper[4928]: E1122 00:20:00.097215 4928 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 00d3a8326144fc506e57213594c8345bb769547066a3d521962cf36f185d04ca is running failed: container process not found" containerID="00d3a8326144fc506e57213594c8345bb769547066a3d521962cf36f185d04ca" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 00:20:00 crc kubenswrapper[4928]: E1122 00:20:00.097690 4928 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 00d3a8326144fc506e57213594c8345bb769547066a3d521962cf36f185d04ca is running failed: container process not found" containerID="00d3a8326144fc506e57213594c8345bb769547066a3d521962cf36f185d04ca" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 00:20:00 crc kubenswrapper[4928]: E1122 00:20:00.098074 4928 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 00d3a8326144fc506e57213594c8345bb769547066a3d521962cf36f185d04ca is running failed: container process not found" containerID="00d3a8326144fc506e57213594c8345bb769547066a3d521962cf36f185d04ca" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 00:20:00 crc kubenswrapper[4928]: E1122 00:20:00.098397 4928 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 00d3a8326144fc506e57213594c8345bb769547066a3d521962cf36f185d04ca is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-gh4xn" podUID="4678775e-476c-4b4c-8832-0715d40f3c13" containerName="registry-server" Nov 22 00:20:00 crc kubenswrapper[4928]: I1122 00:20:00.865915 4928 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/certified-operators-jwqt5"] Nov 22 00:20:00 crc kubenswrapper[4928]: W1122 00:20:00.867899 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef029927_44f9_4500_a0f1_8b6215da9736.slice/crio-43e873d965a728bbb4db9edcdfcc47e4499263229c140ed22b9b4862568c22c8 WatchSource:0}: Error finding container 43e873d965a728bbb4db9edcdfcc47e4499263229c140ed22b9b4862568c22c8: Status 404 returned error can't find the container with id 43e873d965a728bbb4db9edcdfcc47e4499263229c140ed22b9b4862568c22c8 Nov 22 00:20:01 crc kubenswrapper[4928]: I1122 00:20:01.089366 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jwqt5" event={"ID":"ef029927-44f9-4500-a0f1-8b6215da9736","Type":"ContainerStarted","Data":"43e873d965a728bbb4db9edcdfcc47e4499263229c140ed22b9b4862568c22c8"} Nov 22 00:20:02 crc kubenswrapper[4928]: I1122 00:20:02.100674 4928 generic.go:334] "Generic (PLEG): container finished" podID="ef029927-44f9-4500-a0f1-8b6215da9736" containerID="d275c558cf980103f2383e83fb89d670bf7f177b7c237fe65c541bc422397140" exitCode=0 Nov 22 00:20:02 crc kubenswrapper[4928]: I1122 00:20:02.101131 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jwqt5" event={"ID":"ef029927-44f9-4500-a0f1-8b6215da9736","Type":"ContainerDied","Data":"d275c558cf980103f2383e83fb89d670bf7f177b7c237fe65c541bc422397140"} Nov 22 00:20:02 crc kubenswrapper[4928]: I1122 00:20:02.197074 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gh4xn_4678775e-476c-4b4c-8832-0715d40f3c13/registry-server/0.log" Nov 22 00:20:02 crc kubenswrapper[4928]: I1122 00:20:02.198615 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gh4xn" Nov 22 00:20:02 crc kubenswrapper[4928]: I1122 00:20:02.296774 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4678775e-476c-4b4c-8832-0715d40f3c13-utilities\") pod \"4678775e-476c-4b4c-8832-0715d40f3c13\" (UID: \"4678775e-476c-4b4c-8832-0715d40f3c13\") " Nov 22 00:20:02 crc kubenswrapper[4928]: I1122 00:20:02.296851 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4678775e-476c-4b4c-8832-0715d40f3c13-catalog-content\") pod \"4678775e-476c-4b4c-8832-0715d40f3c13\" (UID: \"4678775e-476c-4b4c-8832-0715d40f3c13\") " Nov 22 00:20:02 crc kubenswrapper[4928]: I1122 00:20:02.296926 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlgp4\" (UniqueName: \"kubernetes.io/projected/4678775e-476c-4b4c-8832-0715d40f3c13-kube-api-access-nlgp4\") pod \"4678775e-476c-4b4c-8832-0715d40f3c13\" (UID: \"4678775e-476c-4b4c-8832-0715d40f3c13\") " Nov 22 00:20:02 crc kubenswrapper[4928]: I1122 00:20:02.297651 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4678775e-476c-4b4c-8832-0715d40f3c13-utilities" (OuterVolumeSpecName: "utilities") pod "4678775e-476c-4b4c-8832-0715d40f3c13" (UID: "4678775e-476c-4b4c-8832-0715d40f3c13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:20:02 crc kubenswrapper[4928]: I1122 00:20:02.302453 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4678775e-476c-4b4c-8832-0715d40f3c13-kube-api-access-nlgp4" (OuterVolumeSpecName: "kube-api-access-nlgp4") pod "4678775e-476c-4b4c-8832-0715d40f3c13" (UID: "4678775e-476c-4b4c-8832-0715d40f3c13"). InnerVolumeSpecName "kube-api-access-nlgp4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:20:02 crc kubenswrapper[4928]: I1122 00:20:02.340786 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4678775e-476c-4b4c-8832-0715d40f3c13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4678775e-476c-4b4c-8832-0715d40f3c13" (UID: "4678775e-476c-4b4c-8832-0715d40f3c13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:20:02 crc kubenswrapper[4928]: I1122 00:20:02.402771 4928 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4678775e-476c-4b4c-8832-0715d40f3c13-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 00:20:02 crc kubenswrapper[4928]: I1122 00:20:02.402864 4928 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4678775e-476c-4b4c-8832-0715d40f3c13-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 00:20:02 crc kubenswrapper[4928]: I1122 00:20:02.402877 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlgp4\" (UniqueName: \"kubernetes.io/projected/4678775e-476c-4b4c-8832-0715d40f3c13-kube-api-access-nlgp4\") on node \"crc\" DevicePath \"\"" Nov 22 00:20:03 crc kubenswrapper[4928]: I1122 00:20:03.109274 4928 generic.go:334] "Generic (PLEG): container finished" podID="ef029927-44f9-4500-a0f1-8b6215da9736" containerID="7ce320fa39f016524edb3f3d760982ca553d626ff4b161dc1b7e865afe556a1a" exitCode=0 Nov 22 00:20:03 crc kubenswrapper[4928]: I1122 00:20:03.109534 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jwqt5" event={"ID":"ef029927-44f9-4500-a0f1-8b6215da9736","Type":"ContainerDied","Data":"7ce320fa39f016524edb3f3d760982ca553d626ff4b161dc1b7e865afe556a1a"} Nov 22 00:20:03 crc kubenswrapper[4928]: I1122 00:20:03.111498 4928 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-gh4xn_4678775e-476c-4b4c-8832-0715d40f3c13/registry-server/0.log" Nov 22 00:20:03 crc kubenswrapper[4928]: I1122 00:20:03.113316 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gh4xn" event={"ID":"4678775e-476c-4b4c-8832-0715d40f3c13","Type":"ContainerDied","Data":"f470758d0b989a0359cc2492cf41de89fae94614d9e2de2c6ffce909f4fbd9fe"} Nov 22 00:20:03 crc kubenswrapper[4928]: I1122 00:20:03.113383 4928 scope.go:117] "RemoveContainer" containerID="00d3a8326144fc506e57213594c8345bb769547066a3d521962cf36f185d04ca" Nov 22 00:20:03 crc kubenswrapper[4928]: I1122 00:20:03.113407 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gh4xn" Nov 22 00:20:03 crc kubenswrapper[4928]: I1122 00:20:03.144016 4928 scope.go:117] "RemoveContainer" containerID="e511a8758fedd800752b1331faa35f3f6d2097e6bd6b246900f7f3d11926b512" Nov 22 00:20:03 crc kubenswrapper[4928]: I1122 00:20:03.160632 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gh4xn"] Nov 22 00:20:03 crc kubenswrapper[4928]: I1122 00:20:03.166353 4928 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gh4xn"] Nov 22 00:20:03 crc kubenswrapper[4928]: I1122 00:20:03.172175 4928 scope.go:117] "RemoveContainer" containerID="a9a7a05dc57fbfa6dc2ca4979db77bed1da1692247106539af0b6d6346165bc7" Nov 22 00:20:03 crc kubenswrapper[4928]: I1122 00:20:03.626321 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4678775e-476c-4b4c-8832-0715d40f3c13" path="/var/lib/kubelet/pods/4678775e-476c-4b4c-8832-0715d40f3c13/volumes" Nov 22 00:20:03 crc kubenswrapper[4928]: I1122 00:20:03.994766 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Nov 22 00:20:04 crc 
kubenswrapper[4928]: I1122 00:20:04.120418 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jwqt5" event={"ID":"ef029927-44f9-4500-a0f1-8b6215da9736","Type":"ContainerStarted","Data":"93e07ebcb152605618b9ec99505cb80b12b5a2f744d611fca0ba11bf66bd2561"} Nov 22 00:20:04 crc kubenswrapper[4928]: I1122 00:20:04.141593 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jwqt5" podStartSLOduration=3.726407283 podStartE2EDuration="5.14157349s" podCreationTimestamp="2025-11-22 00:19:59 +0000 UTC" firstStartedPulling="2025-11-22 00:20:02.103966176 +0000 UTC m=+837.391997367" lastFinishedPulling="2025-11-22 00:20:03.519132383 +0000 UTC m=+838.807163574" observedRunningTime="2025-11-22 00:20:04.137996153 +0000 UTC m=+839.426027364" watchObservedRunningTime="2025-11-22 00:20:04.14157349 +0000 UTC m=+839.429604681" Nov 22 00:20:09 crc kubenswrapper[4928]: I1122 00:20:09.422724 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jwqt5" Nov 22 00:20:09 crc kubenswrapper[4928]: I1122 00:20:09.423322 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jwqt5" Nov 22 00:20:09 crc kubenswrapper[4928]: I1122 00:20:09.466192 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jwqt5" Nov 22 00:20:10 crc kubenswrapper[4928]: I1122 00:20:10.233401 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jwqt5" Nov 22 00:20:10 crc kubenswrapper[4928]: I1122 00:20:10.697336 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jwqt5"] Nov 22 00:20:12 crc kubenswrapper[4928]: I1122 00:20:12.198271 4928 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/certified-operators-jwqt5" podUID="ef029927-44f9-4500-a0f1-8b6215da9736" containerName="registry-server" containerID="cri-o://93e07ebcb152605618b9ec99505cb80b12b5a2f744d611fca0ba11bf66bd2561" gracePeriod=2 Nov 22 00:20:15 crc kubenswrapper[4928]: I1122 00:20:15.261491 4928 generic.go:334] "Generic (PLEG): container finished" podID="ef029927-44f9-4500-a0f1-8b6215da9736" containerID="93e07ebcb152605618b9ec99505cb80b12b5a2f744d611fca0ba11bf66bd2561" exitCode=0 Nov 22 00:20:15 crc kubenswrapper[4928]: I1122 00:20:15.261602 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jwqt5" event={"ID":"ef029927-44f9-4500-a0f1-8b6215da9736","Type":"ContainerDied","Data":"93e07ebcb152605618b9ec99505cb80b12b5a2f744d611fca0ba11bf66bd2561"} Nov 22 00:20:16 crc kubenswrapper[4928]: I1122 00:20:16.741517 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jwqt5" Nov 22 00:20:16 crc kubenswrapper[4928]: I1122 00:20:16.798779 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef029927-44f9-4500-a0f1-8b6215da9736-catalog-content\") pod \"ef029927-44f9-4500-a0f1-8b6215da9736\" (UID: \"ef029927-44f9-4500-a0f1-8b6215da9736\") " Nov 22 00:20:16 crc kubenswrapper[4928]: I1122 00:20:16.799247 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef029927-44f9-4500-a0f1-8b6215da9736-utilities\") pod \"ef029927-44f9-4500-a0f1-8b6215da9736\" (UID: \"ef029927-44f9-4500-a0f1-8b6215da9736\") " Nov 22 00:20:16 crc kubenswrapper[4928]: I1122 00:20:16.799323 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mvcl\" (UniqueName: \"kubernetes.io/projected/ef029927-44f9-4500-a0f1-8b6215da9736-kube-api-access-7mvcl\") pod 
\"ef029927-44f9-4500-a0f1-8b6215da9736\" (UID: \"ef029927-44f9-4500-a0f1-8b6215da9736\") " Nov 22 00:20:16 crc kubenswrapper[4928]: I1122 00:20:16.801414 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef029927-44f9-4500-a0f1-8b6215da9736-utilities" (OuterVolumeSpecName: "utilities") pod "ef029927-44f9-4500-a0f1-8b6215da9736" (UID: "ef029927-44f9-4500-a0f1-8b6215da9736"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:20:16 crc kubenswrapper[4928]: I1122 00:20:16.807816 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef029927-44f9-4500-a0f1-8b6215da9736-kube-api-access-7mvcl" (OuterVolumeSpecName: "kube-api-access-7mvcl") pod "ef029927-44f9-4500-a0f1-8b6215da9736" (UID: "ef029927-44f9-4500-a0f1-8b6215da9736"). InnerVolumeSpecName "kube-api-access-7mvcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:20:16 crc kubenswrapper[4928]: I1122 00:20:16.846743 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef029927-44f9-4500-a0f1-8b6215da9736-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef029927-44f9-4500-a0f1-8b6215da9736" (UID: "ef029927-44f9-4500-a0f1-8b6215da9736"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:20:16 crc kubenswrapper[4928]: I1122 00:20:16.901022 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mvcl\" (UniqueName: \"kubernetes.io/projected/ef029927-44f9-4500-a0f1-8b6215da9736-kube-api-access-7mvcl\") on node \"crc\" DevicePath \"\"" Nov 22 00:20:16 crc kubenswrapper[4928]: I1122 00:20:16.901078 4928 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef029927-44f9-4500-a0f1-8b6215da9736-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 00:20:16 crc kubenswrapper[4928]: I1122 00:20:16.901092 4928 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef029927-44f9-4500-a0f1-8b6215da9736-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 00:20:17 crc kubenswrapper[4928]: I1122 00:20:17.278651 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jwqt5" event={"ID":"ef029927-44f9-4500-a0f1-8b6215da9736","Type":"ContainerDied","Data":"43e873d965a728bbb4db9edcdfcc47e4499263229c140ed22b9b4862568c22c8"} Nov 22 00:20:17 crc kubenswrapper[4928]: I1122 00:20:17.278965 4928 scope.go:117] "RemoveContainer" containerID="93e07ebcb152605618b9ec99505cb80b12b5a2f744d611fca0ba11bf66bd2561" Nov 22 00:20:17 crc kubenswrapper[4928]: I1122 00:20:17.278745 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jwqt5" Nov 22 00:20:17 crc kubenswrapper[4928]: I1122 00:20:17.301717 4928 scope.go:117] "RemoveContainer" containerID="7ce320fa39f016524edb3f3d760982ca553d626ff4b161dc1b7e865afe556a1a" Nov 22 00:20:17 crc kubenswrapper[4928]: I1122 00:20:17.308455 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jwqt5"] Nov 22 00:20:17 crc kubenswrapper[4928]: I1122 00:20:17.321649 4928 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jwqt5"] Nov 22 00:20:17 crc kubenswrapper[4928]: I1122 00:20:17.328784 4928 scope.go:117] "RemoveContainer" containerID="d275c558cf980103f2383e83fb89d670bf7f177b7c237fe65c541bc422397140" Nov 22 00:20:17 crc kubenswrapper[4928]: I1122 00:20:17.629170 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef029927-44f9-4500-a0f1-8b6215da9736" path="/var/lib/kubelet/pods/ef029927-44f9-4500-a0f1-8b6215da9736/volumes" Nov 22 00:21:57 crc kubenswrapper[4928]: I1122 00:21:57.378832 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"fe966661-429e-44de-8e6a-7ff2aa20dc93","Type":"ContainerDied","Data":"eff25d78d96fbd319bbbea45100a39b8018bfb35e1577a5c6931fb3f551f4caa"} Nov 22 00:21:57 crc kubenswrapper[4928]: I1122 00:21:57.378773 4928 generic.go:334] "Generic (PLEG): container finished" podID="fe966661-429e-44de-8e6a-7ff2aa20dc93" containerID="eff25d78d96fbd319bbbea45100a39b8018bfb35e1577a5c6931fb3f551f4caa" exitCode=0 Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.669315 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.814423 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpxrz\" (UniqueName: \"kubernetes.io/projected/fe966661-429e-44de-8e6a-7ff2aa20dc93-kube-api-access-mpxrz\") pod \"fe966661-429e-44de-8e6a-7ff2aa20dc93\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.814492 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/fe966661-429e-44de-8e6a-7ff2aa20dc93-container-storage-run\") pod \"fe966661-429e-44de-8e6a-7ff2aa20dc93\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.814563 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/fe966661-429e-44de-8e6a-7ff2aa20dc93-buildcachedir\") pod \"fe966661-429e-44de-8e6a-7ff2aa20dc93\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.814591 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/fe966661-429e-44de-8e6a-7ff2aa20dc93-builder-dockercfg-zp4ph-pull\") pod \"fe966661-429e-44de-8e6a-7ff2aa20dc93\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.814616 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/fe966661-429e-44de-8e6a-7ff2aa20dc93-build-blob-cache\") pod \"fe966661-429e-44de-8e6a-7ff2aa20dc93\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.814656 4928 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe966661-429e-44de-8e6a-7ff2aa20dc93-build-proxy-ca-bundles\") pod \"fe966661-429e-44de-8e6a-7ff2aa20dc93\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.814695 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/fe966661-429e-44de-8e6a-7ff2aa20dc93-container-storage-root\") pod \"fe966661-429e-44de-8e6a-7ff2aa20dc93\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.814720 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fe966661-429e-44de-8e6a-7ff2aa20dc93-node-pullsecrets\") pod \"fe966661-429e-44de-8e6a-7ff2aa20dc93\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.814757 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/fe966661-429e-44de-8e6a-7ff2aa20dc93-buildworkdir\") pod \"fe966661-429e-44de-8e6a-7ff2aa20dc93\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.814785 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/fe966661-429e-44de-8e6a-7ff2aa20dc93-build-system-configs\") pod \"fe966661-429e-44de-8e6a-7ff2aa20dc93\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.814891 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: 
\"kubernetes.io/secret/fe966661-429e-44de-8e6a-7ff2aa20dc93-builder-dockercfg-zp4ph-push\") pod \"fe966661-429e-44de-8e6a-7ff2aa20dc93\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.814873 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe966661-429e-44de-8e6a-7ff2aa20dc93-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "fe966661-429e-44de-8e6a-7ff2aa20dc93" (UID: "fe966661-429e-44de-8e6a-7ff2aa20dc93"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.814917 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe966661-429e-44de-8e6a-7ff2aa20dc93-build-ca-bundles\") pod \"fe966661-429e-44de-8e6a-7ff2aa20dc93\" (UID: \"fe966661-429e-44de-8e6a-7ff2aa20dc93\") " Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.814889 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe966661-429e-44de-8e6a-7ff2aa20dc93-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "fe966661-429e-44de-8e6a-7ff2aa20dc93" (UID: "fe966661-429e-44de-8e6a-7ff2aa20dc93"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.815204 4928 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fe966661-429e-44de-8e6a-7ff2aa20dc93-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.815222 4928 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/fe966661-429e-44de-8e6a-7ff2aa20dc93-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.816138 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe966661-429e-44de-8e6a-7ff2aa20dc93-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "fe966661-429e-44de-8e6a-7ff2aa20dc93" (UID: "fe966661-429e-44de-8e6a-7ff2aa20dc93"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.816246 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe966661-429e-44de-8e6a-7ff2aa20dc93-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "fe966661-429e-44de-8e6a-7ff2aa20dc93" (UID: "fe966661-429e-44de-8e6a-7ff2aa20dc93"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.816310 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe966661-429e-44de-8e6a-7ff2aa20dc93-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "fe966661-429e-44de-8e6a-7ff2aa20dc93" (UID: "fe966661-429e-44de-8e6a-7ff2aa20dc93"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.816771 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe966661-429e-44de-8e6a-7ff2aa20dc93-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "fe966661-429e-44de-8e6a-7ff2aa20dc93" (UID: "fe966661-429e-44de-8e6a-7ff2aa20dc93"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.820983 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe966661-429e-44de-8e6a-7ff2aa20dc93-builder-dockercfg-zp4ph-push" (OuterVolumeSpecName: "builder-dockercfg-zp4ph-push") pod "fe966661-429e-44de-8e6a-7ff2aa20dc93" (UID: "fe966661-429e-44de-8e6a-7ff2aa20dc93"). InnerVolumeSpecName "builder-dockercfg-zp4ph-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.821322 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe966661-429e-44de-8e6a-7ff2aa20dc93-kube-api-access-mpxrz" (OuterVolumeSpecName: "kube-api-access-mpxrz") pod "fe966661-429e-44de-8e6a-7ff2aa20dc93" (UID: "fe966661-429e-44de-8e6a-7ff2aa20dc93"). InnerVolumeSpecName "kube-api-access-mpxrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.822623 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe966661-429e-44de-8e6a-7ff2aa20dc93-builder-dockercfg-zp4ph-pull" (OuterVolumeSpecName: "builder-dockercfg-zp4ph-pull") pod "fe966661-429e-44de-8e6a-7ff2aa20dc93" (UID: "fe966661-429e-44de-8e6a-7ff2aa20dc93"). InnerVolumeSpecName "builder-dockercfg-zp4ph-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.870079 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe966661-429e-44de-8e6a-7ff2aa20dc93-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "fe966661-429e-44de-8e6a-7ff2aa20dc93" (UID: "fe966661-429e-44de-8e6a-7ff2aa20dc93"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.916161 4928 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/fe966661-429e-44de-8e6a-7ff2aa20dc93-builder-dockercfg-zp4ph-pull\") on node \"crc\" DevicePath \"\"" Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.916456 4928 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe966661-429e-44de-8e6a-7ff2aa20dc93-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.916474 4928 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/fe966661-429e-44de-8e6a-7ff2aa20dc93-buildworkdir\") on node \"crc\" DevicePath \"\"" Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.916488 4928 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/fe966661-429e-44de-8e6a-7ff2aa20dc93-build-system-configs\") on node \"crc\" DevicePath \"\"" Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.916496 4928 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/fe966661-429e-44de-8e6a-7ff2aa20dc93-builder-dockercfg-zp4ph-push\") on node \"crc\" DevicePath \"\"" Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.916505 4928 reconciler_common.go:293] 
"Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe966661-429e-44de-8e6a-7ff2aa20dc93-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.916513 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpxrz\" (UniqueName: \"kubernetes.io/projected/fe966661-429e-44de-8e6a-7ff2aa20dc93-kube-api-access-mpxrz\") on node \"crc\" DevicePath \"\"" Nov 22 00:21:58 crc kubenswrapper[4928]: I1122 00:21:58.916522 4928 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/fe966661-429e-44de-8e6a-7ff2aa20dc93-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 22 00:21:59 crc kubenswrapper[4928]: I1122 00:21:59.395436 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"fe966661-429e-44de-8e6a-7ff2aa20dc93","Type":"ContainerDied","Data":"699f50f670d2b990157ee3f1cc38a1c4fd883a4165a9c56266d28bc2eac826bc"} Nov 22 00:21:59 crc kubenswrapper[4928]: I1122 00:21:59.395476 4928 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="699f50f670d2b990157ee3f1cc38a1c4fd883a4165a9c56266d28bc2eac826bc" Nov 22 00:21:59 crc kubenswrapper[4928]: I1122 00:21:59.395494 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Nov 22 00:21:59 crc kubenswrapper[4928]: I1122 00:21:59.623673 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe966661-429e-44de-8e6a-7ff2aa20dc93-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "fe966661-429e-44de-8e6a-7ff2aa20dc93" (UID: "fe966661-429e-44de-8e6a-7ff2aa20dc93"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:21:59 crc kubenswrapper[4928]: I1122 00:21:59.626389 4928 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/fe966661-429e-44de-8e6a-7ff2aa20dc93-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 22 00:22:00 crc kubenswrapper[4928]: I1122 00:22:00.907235 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe966661-429e-44de-8e6a-7ff2aa20dc93-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "fe966661-429e-44de-8e6a-7ff2aa20dc93" (UID: "fe966661-429e-44de-8e6a-7ff2aa20dc93"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:22:00 crc kubenswrapper[4928]: I1122 00:22:00.942541 4928 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/fe966661-429e-44de-8e6a-7ff2aa20dc93-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.432024 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Nov 22 00:22:03 crc kubenswrapper[4928]: E1122 00:22:03.432586 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef029927-44f9-4500-a0f1-8b6215da9736" containerName="registry-server" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.432604 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef029927-44f9-4500-a0f1-8b6215da9736" containerName="registry-server" Nov 22 00:22:03 crc kubenswrapper[4928]: E1122 00:22:03.432616 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe966661-429e-44de-8e6a-7ff2aa20dc93" containerName="docker-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.432624 4928 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fe966661-429e-44de-8e6a-7ff2aa20dc93" containerName="docker-build" Nov 22 00:22:03 crc kubenswrapper[4928]: E1122 00:22:03.432640 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef029927-44f9-4500-a0f1-8b6215da9736" containerName="extract-utilities" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.432647 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef029927-44f9-4500-a0f1-8b6215da9736" containerName="extract-utilities" Nov 22 00:22:03 crc kubenswrapper[4928]: E1122 00:22:03.432660 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe966661-429e-44de-8e6a-7ff2aa20dc93" containerName="manage-dockerfile" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.432668 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe966661-429e-44de-8e6a-7ff2aa20dc93" containerName="manage-dockerfile" Nov 22 00:22:03 crc kubenswrapper[4928]: E1122 00:22:03.432679 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef029927-44f9-4500-a0f1-8b6215da9736" containerName="extract-content" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.432686 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef029927-44f9-4500-a0f1-8b6215da9736" containerName="extract-content" Nov 22 00:22:03 crc kubenswrapper[4928]: E1122 00:22:03.432701 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4678775e-476c-4b4c-8832-0715d40f3c13" containerName="extract-utilities" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.432708 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="4678775e-476c-4b4c-8832-0715d40f3c13" containerName="extract-utilities" Nov 22 00:22:03 crc kubenswrapper[4928]: E1122 00:22:03.432720 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe966661-429e-44de-8e6a-7ff2aa20dc93" containerName="git-clone" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.432727 4928 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fe966661-429e-44de-8e6a-7ff2aa20dc93" containerName="git-clone" Nov 22 00:22:03 crc kubenswrapper[4928]: E1122 00:22:03.432738 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4678775e-476c-4b4c-8832-0715d40f3c13" containerName="registry-server" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.432745 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="4678775e-476c-4b4c-8832-0715d40f3c13" containerName="registry-server" Nov 22 00:22:03 crc kubenswrapper[4928]: E1122 00:22:03.432757 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4678775e-476c-4b4c-8832-0715d40f3c13" containerName="extract-content" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.432764 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="4678775e-476c-4b4c-8832-0715d40f3c13" containerName="extract-content" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.432897 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe966661-429e-44de-8e6a-7ff2aa20dc93" containerName="docker-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.432912 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="4678775e-476c-4b4c-8832-0715d40f3c13" containerName="registry-server" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.432926 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef029927-44f9-4500-a0f1-8b6215da9736" containerName="registry-server" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.433667 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.437409 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-ca" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.437863 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-sys-config" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.438028 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-global-ca" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.438444 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-zp4ph" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.451441 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.476700 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0c7185da-47bf-493a-aae1-4236cd4c8267-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.477023 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c7185da-47bf-493a-aae1-4236cd4c8267-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.477114 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0c7185da-47bf-493a-aae1-4236cd4c8267-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.477240 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0c7185da-47bf-493a-aae1-4236cd4c8267-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.477293 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/0c7185da-47bf-493a-aae1-4236cd4c8267-builder-dockercfg-zp4ph-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.477337 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0c7185da-47bf-493a-aae1-4236cd4c8267-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.477366 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0c7185da-47bf-493a-aae1-4236cd4c8267-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 
22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.477381 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mznn\" (UniqueName: \"kubernetes.io/projected/0c7185da-47bf-493a-aae1-4236cd4c8267-kube-api-access-9mznn\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.477401 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0c7185da-47bf-493a-aae1-4236cd4c8267-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.477423 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/0c7185da-47bf-493a-aae1-4236cd4c8267-builder-dockercfg-zp4ph-push\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.477444 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0c7185da-47bf-493a-aae1-4236cd4c8267-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.477467 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/0c7185da-47bf-493a-aae1-4236cd4c8267-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.579498 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0c7185da-47bf-493a-aae1-4236cd4c8267-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.579551 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c7185da-47bf-493a-aae1-4236cd4c8267-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.579569 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0c7185da-47bf-493a-aae1-4236cd4c8267-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.579594 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0c7185da-47bf-493a-aae1-4236cd4c8267-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.579623 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/0c7185da-47bf-493a-aae1-4236cd4c8267-builder-dockercfg-zp4ph-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.579645 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0c7185da-47bf-493a-aae1-4236cd4c8267-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.579665 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0c7185da-47bf-493a-aae1-4236cd4c8267-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.579679 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mznn\" (UniqueName: \"kubernetes.io/projected/0c7185da-47bf-493a-aae1-4236cd4c8267-kube-api-access-9mznn\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.579707 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0c7185da-47bf-493a-aae1-4236cd4c8267-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.579729 4928 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/0c7185da-47bf-493a-aae1-4236cd4c8267-builder-dockercfg-zp4ph-push\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.579746 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0c7185da-47bf-493a-aae1-4236cd4c8267-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.579768 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c7185da-47bf-493a-aae1-4236cd4c8267-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.579953 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0c7185da-47bf-493a-aae1-4236cd4c8267-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.580036 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0c7185da-47bf-493a-aae1-4236cd4c8267-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc 
kubenswrapper[4928]: I1122 00:22:03.580478 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0c7185da-47bf-493a-aae1-4236cd4c8267-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.580656 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0c7185da-47bf-493a-aae1-4236cd4c8267-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.580673 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0c7185da-47bf-493a-aae1-4236cd4c8267-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.580695 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0c7185da-47bf-493a-aae1-4236cd4c8267-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.580964 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0c7185da-47bf-493a-aae1-4236cd4c8267-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 
00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.581132 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c7185da-47bf-493a-aae1-4236cd4c8267-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.581451 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c7185da-47bf-493a-aae1-4236cd4c8267-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.585329 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/0c7185da-47bf-493a-aae1-4236cd4c8267-builder-dockercfg-zp4ph-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.587241 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/0c7185da-47bf-493a-aae1-4236cd4c8267-builder-dockercfg-zp4ph-push\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.597274 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mznn\" (UniqueName: \"kubernetes.io/projected/0c7185da-47bf-493a-aae1-4236cd4c8267-kube-api-access-9mznn\") pod \"smart-gateway-operator-1-build\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " 
pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.750668 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:03 crc kubenswrapper[4928]: I1122 00:22:03.922007 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Nov 22 00:22:04 crc kubenswrapper[4928]: I1122 00:22:04.425949 4928 generic.go:334] "Generic (PLEG): container finished" podID="0c7185da-47bf-493a-aae1-4236cd4c8267" containerID="384f73f90e094cdbd5658400785830b4d296ea7716078b827f7f53b8c266284b" exitCode=0 Nov 22 00:22:04 crc kubenswrapper[4928]: I1122 00:22:04.426016 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"0c7185da-47bf-493a-aae1-4236cd4c8267","Type":"ContainerDied","Data":"384f73f90e094cdbd5658400785830b4d296ea7716078b827f7f53b8c266284b"} Nov 22 00:22:04 crc kubenswrapper[4928]: I1122 00:22:04.426233 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"0c7185da-47bf-493a-aae1-4236cd4c8267","Type":"ContainerStarted","Data":"f70d5a723ed1f303670fc409588ea4eb3a03541667b543879af0a9b34e1c10a2"} Nov 22 00:22:05 crc kubenswrapper[4928]: I1122 00:22:05.434651 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"0c7185da-47bf-493a-aae1-4236cd4c8267","Type":"ContainerStarted","Data":"6e28a05e58f3945ca9e91162bc071010c737b1af984553acd55a70490917e218"} Nov 22 00:22:05 crc kubenswrapper[4928]: I1122 00:22:05.455855 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-1-build" podStartSLOduration=2.455836489 podStartE2EDuration="2.455836489s" podCreationTimestamp="2025-11-22 00:22:03 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:22:05.453646938 +0000 UTC m=+960.741678129" watchObservedRunningTime="2025-11-22 00:22:05.455836489 +0000 UTC m=+960.743867680" Nov 22 00:22:06 crc kubenswrapper[4928]: I1122 00:22:06.707650 4928 patch_prober.go:28] interesting pod/machine-config-daemon-hp7cp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 00:22:06 crc kubenswrapper[4928]: I1122 00:22:06.708006 4928 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 00:22:14 crc kubenswrapper[4928]: I1122 00:22:14.100968 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Nov 22 00:22:14 crc kubenswrapper[4928]: I1122 00:22:14.101196 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/smart-gateway-operator-1-build" podUID="0c7185da-47bf-493a-aae1-4236cd4c8267" containerName="docker-build" containerID="cri-o://6e28a05e58f3945ca9e91162bc071010c737b1af984553acd55a70490917e218" gracePeriod=30 Nov 22 00:22:14 crc kubenswrapper[4928]: I1122 00:22:14.953880 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_0c7185da-47bf-493a-aae1-4236cd4c8267/docker-build/0.log" Nov 22 00:22:14 crc kubenswrapper[4928]: I1122 00:22:14.954617 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.131395 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/0c7185da-47bf-493a-aae1-4236cd4c8267-builder-dockercfg-zp4ph-pull\") pod \"0c7185da-47bf-493a-aae1-4236cd4c8267\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.131469 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0c7185da-47bf-493a-aae1-4236cd4c8267-buildworkdir\") pod \"0c7185da-47bf-493a-aae1-4236cd4c8267\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.131491 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0c7185da-47bf-493a-aae1-4236cd4c8267-container-storage-run\") pod \"0c7185da-47bf-493a-aae1-4236cd4c8267\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.131521 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c7185da-47bf-493a-aae1-4236cd4c8267-build-ca-bundles\") pod \"0c7185da-47bf-493a-aae1-4236cd4c8267\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.131543 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mznn\" (UniqueName: \"kubernetes.io/projected/0c7185da-47bf-493a-aae1-4236cd4c8267-kube-api-access-9mznn\") pod \"0c7185da-47bf-493a-aae1-4236cd4c8267\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.131613 4928 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0c7185da-47bf-493a-aae1-4236cd4c8267-buildcachedir\") pod \"0c7185da-47bf-493a-aae1-4236cd4c8267\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.131657 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0c7185da-47bf-493a-aae1-4236cd4c8267-build-blob-cache\") pod \"0c7185da-47bf-493a-aae1-4236cd4c8267\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.131680 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0c7185da-47bf-493a-aae1-4236cd4c8267-node-pullsecrets\") pod \"0c7185da-47bf-493a-aae1-4236cd4c8267\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.131712 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c7185da-47bf-493a-aae1-4236cd4c8267-build-proxy-ca-bundles\") pod \"0c7185da-47bf-493a-aae1-4236cd4c8267\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.131729 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0c7185da-47bf-493a-aae1-4236cd4c8267-build-system-configs\") pod \"0c7185da-47bf-493a-aae1-4236cd4c8267\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.131751 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/0c7185da-47bf-493a-aae1-4236cd4c8267-builder-dockercfg-zp4ph-push\") pod 
\"0c7185da-47bf-493a-aae1-4236cd4c8267\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.131773 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0c7185da-47bf-493a-aae1-4236cd4c8267-container-storage-root\") pod \"0c7185da-47bf-493a-aae1-4236cd4c8267\" (UID: \"0c7185da-47bf-493a-aae1-4236cd4c8267\") " Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.132026 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c7185da-47bf-493a-aae1-4236cd4c8267-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "0c7185da-47bf-493a-aae1-4236cd4c8267" (UID: "0c7185da-47bf-493a-aae1-4236cd4c8267"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.132812 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c7185da-47bf-493a-aae1-4236cd4c8267-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "0c7185da-47bf-493a-aae1-4236cd4c8267" (UID: "0c7185da-47bf-493a-aae1-4236cd4c8267"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.132846 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c7185da-47bf-493a-aae1-4236cd4c8267-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "0c7185da-47bf-493a-aae1-4236cd4c8267" (UID: "0c7185da-47bf-493a-aae1-4236cd4c8267"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.132965 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c7185da-47bf-493a-aae1-4236cd4c8267-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "0c7185da-47bf-493a-aae1-4236cd4c8267" (UID: "0c7185da-47bf-493a-aae1-4236cd4c8267"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.133545 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c7185da-47bf-493a-aae1-4236cd4c8267-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "0c7185da-47bf-493a-aae1-4236cd4c8267" (UID: "0c7185da-47bf-493a-aae1-4236cd4c8267"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.133814 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c7185da-47bf-493a-aae1-4236cd4c8267-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "0c7185da-47bf-493a-aae1-4236cd4c8267" (UID: "0c7185da-47bf-493a-aae1-4236cd4c8267"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.134310 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c7185da-47bf-493a-aae1-4236cd4c8267-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "0c7185da-47bf-493a-aae1-4236cd4c8267" (UID: "0c7185da-47bf-493a-aae1-4236cd4c8267"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.138828 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c7185da-47bf-493a-aae1-4236cd4c8267-kube-api-access-9mznn" (OuterVolumeSpecName: "kube-api-access-9mznn") pod "0c7185da-47bf-493a-aae1-4236cd4c8267" (UID: "0c7185da-47bf-493a-aae1-4236cd4c8267"). InnerVolumeSpecName "kube-api-access-9mznn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.139066 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c7185da-47bf-493a-aae1-4236cd4c8267-builder-dockercfg-zp4ph-pull" (OuterVolumeSpecName: "builder-dockercfg-zp4ph-pull") pod "0c7185da-47bf-493a-aae1-4236cd4c8267" (UID: "0c7185da-47bf-493a-aae1-4236cd4c8267"). InnerVolumeSpecName "builder-dockercfg-zp4ph-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.139965 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c7185da-47bf-493a-aae1-4236cd4c8267-builder-dockercfg-zp4ph-push" (OuterVolumeSpecName: "builder-dockercfg-zp4ph-push") pod "0c7185da-47bf-493a-aae1-4236cd4c8267" (UID: "0c7185da-47bf-493a-aae1-4236cd4c8267"). InnerVolumeSpecName "builder-dockercfg-zp4ph-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.233203 4928 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0c7185da-47bf-493a-aae1-4236cd4c8267-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.233233 4928 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0c7185da-47bf-493a-aae1-4236cd4c8267-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.233241 4928 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c7185da-47bf-493a-aae1-4236cd4c8267-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.233253 4928 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0c7185da-47bf-493a-aae1-4236cd4c8267-build-system-configs\") on node \"crc\" DevicePath \"\"" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.233262 4928 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/0c7185da-47bf-493a-aae1-4236cd4c8267-builder-dockercfg-zp4ph-push\") on node \"crc\" DevicePath \"\"" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.233279 4928 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/0c7185da-47bf-493a-aae1-4236cd4c8267-builder-dockercfg-zp4ph-pull\") on node \"crc\" DevicePath \"\"" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.233293 4928 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0c7185da-47bf-493a-aae1-4236cd4c8267-buildworkdir\") on node \"crc\" DevicePath 
\"\"" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.233303 4928 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0c7185da-47bf-493a-aae1-4236cd4c8267-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.233311 4928 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c7185da-47bf-493a-aae1-4236cd4c8267-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.233321 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mznn\" (UniqueName: \"kubernetes.io/projected/0c7185da-47bf-493a-aae1-4236cd4c8267-kube-api-access-9mznn\") on node \"crc\" DevicePath \"\"" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.304746 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c7185da-47bf-493a-aae1-4236cd4c8267-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "0c7185da-47bf-493a-aae1-4236cd4c8267" (UID: "0c7185da-47bf-493a-aae1-4236cd4c8267"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.334244 4928 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0c7185da-47bf-493a-aae1-4236cd4c8267-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.497655 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_0c7185da-47bf-493a-aae1-4236cd4c8267/docker-build/0.log" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.498472 4928 generic.go:334] "Generic (PLEG): container finished" podID="0c7185da-47bf-493a-aae1-4236cd4c8267" containerID="6e28a05e58f3945ca9e91162bc071010c737b1af984553acd55a70490917e218" exitCode=1 Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.498543 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"0c7185da-47bf-493a-aae1-4236cd4c8267","Type":"ContainerDied","Data":"6e28a05e58f3945ca9e91162bc071010c737b1af984553acd55a70490917e218"} Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.498651 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"0c7185da-47bf-493a-aae1-4236cd4c8267","Type":"ContainerDied","Data":"f70d5a723ed1f303670fc409588ea4eb3a03541667b543879af0a9b34e1c10a2"} Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.498677 4928 scope.go:117] "RemoveContainer" containerID="6e28a05e58f3945ca9e91162bc071010c737b1af984553acd55a70490917e218" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.498755 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.565529 4928 scope.go:117] "RemoveContainer" containerID="384f73f90e094cdbd5658400785830b4d296ea7716078b827f7f53b8c266284b" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.567968 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c7185da-47bf-493a-aae1-4236cd4c8267-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "0c7185da-47bf-493a-aae1-4236cd4c8267" (UID: "0c7185da-47bf-493a-aae1-4236cd4c8267"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.589953 4928 scope.go:117] "RemoveContainer" containerID="6e28a05e58f3945ca9e91162bc071010c737b1af984553acd55a70490917e218" Nov 22 00:22:15 crc kubenswrapper[4928]: E1122 00:22:15.590509 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e28a05e58f3945ca9e91162bc071010c737b1af984553acd55a70490917e218\": container with ID starting with 6e28a05e58f3945ca9e91162bc071010c737b1af984553acd55a70490917e218 not found: ID does not exist" containerID="6e28a05e58f3945ca9e91162bc071010c737b1af984553acd55a70490917e218" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.590551 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e28a05e58f3945ca9e91162bc071010c737b1af984553acd55a70490917e218"} err="failed to get container status \"6e28a05e58f3945ca9e91162bc071010c737b1af984553acd55a70490917e218\": rpc error: code = NotFound desc = could not find container \"6e28a05e58f3945ca9e91162bc071010c737b1af984553acd55a70490917e218\": container with ID starting with 6e28a05e58f3945ca9e91162bc071010c737b1af984553acd55a70490917e218 not found: ID does not exist" Nov 22 00:22:15 crc 
kubenswrapper[4928]: I1122 00:22:15.590585 4928 scope.go:117] "RemoveContainer" containerID="384f73f90e094cdbd5658400785830b4d296ea7716078b827f7f53b8c266284b" Nov 22 00:22:15 crc kubenswrapper[4928]: E1122 00:22:15.590961 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"384f73f90e094cdbd5658400785830b4d296ea7716078b827f7f53b8c266284b\": container with ID starting with 384f73f90e094cdbd5658400785830b4d296ea7716078b827f7f53b8c266284b not found: ID does not exist" containerID="384f73f90e094cdbd5658400785830b4d296ea7716078b827f7f53b8c266284b" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.590986 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"384f73f90e094cdbd5658400785830b4d296ea7716078b827f7f53b8c266284b"} err="failed to get container status \"384f73f90e094cdbd5658400785830b4d296ea7716078b827f7f53b8c266284b\": rpc error: code = NotFound desc = could not find container \"384f73f90e094cdbd5658400785830b4d296ea7716078b827f7f53b8c266284b\": container with ID starting with 384f73f90e094cdbd5658400785830b4d296ea7716078b827f7f53b8c266284b not found: ID does not exist" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.638771 4928 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0c7185da-47bf-493a-aae1-4236cd4c8267-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.659062 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Nov 22 00:22:15 crc kubenswrapper[4928]: E1122 00:22:15.659490 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c7185da-47bf-493a-aae1-4236cd4c8267" containerName="manage-dockerfile" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.659530 4928 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0c7185da-47bf-493a-aae1-4236cd4c8267" containerName="manage-dockerfile" Nov 22 00:22:15 crc kubenswrapper[4928]: E1122 00:22:15.659548 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c7185da-47bf-493a-aae1-4236cd4c8267" containerName="docker-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.659555 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c7185da-47bf-493a-aae1-4236cd4c8267" containerName="docker-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.659681 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c7185da-47bf-493a-aae1-4236cd4c8267" containerName="docker-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.660768 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.663273 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-ca" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.664533 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-sys-config" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.664761 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-global-ca" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.691453 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.827407 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.831198 4928 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 
00:22:15.841309 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/50bbc7b5-2171-4059-a071-883759dd137a-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.841354 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfhzq\" (UniqueName: \"kubernetes.io/projected/50bbc7b5-2171-4059-a071-883759dd137a-kube-api-access-hfhzq\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.841376 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/50bbc7b5-2171-4059-a071-883759dd137a-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.841401 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/50bbc7b5-2171-4059-a071-883759dd137a-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.841475 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/50bbc7b5-2171-4059-a071-883759dd137a-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: 
\"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.841558 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/50bbc7b5-2171-4059-a071-883759dd137a-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.841586 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/50bbc7b5-2171-4059-a071-883759dd137a-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.841610 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/50bbc7b5-2171-4059-a071-883759dd137a-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.841638 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/50bbc7b5-2171-4059-a071-883759dd137a-builder-dockercfg-zp4ph-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.841692 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/50bbc7b5-2171-4059-a071-883759dd137a-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.841710 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/50bbc7b5-2171-4059-a071-883759dd137a-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.841732 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/50bbc7b5-2171-4059-a071-883759dd137a-builder-dockercfg-zp4ph-push\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.943356 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/50bbc7b5-2171-4059-a071-883759dd137a-builder-dockercfg-zp4ph-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.943507 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/50bbc7b5-2171-4059-a071-883759dd137a-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 
00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.943591 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/50bbc7b5-2171-4059-a071-883759dd137a-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.943770 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/50bbc7b5-2171-4059-a071-883759dd137a-builder-dockercfg-zp4ph-push\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.944208 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/50bbc7b5-2171-4059-a071-883759dd137a-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.944405 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/50bbc7b5-2171-4059-a071-883759dd137a-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.944828 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/50bbc7b5-2171-4059-a071-883759dd137a-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " 
pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.944913 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfhzq\" (UniqueName: \"kubernetes.io/projected/50bbc7b5-2171-4059-a071-883759dd137a-kube-api-access-hfhzq\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.944954 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/50bbc7b5-2171-4059-a071-883759dd137a-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.944979 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/50bbc7b5-2171-4059-a071-883759dd137a-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.945001 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/50bbc7b5-2171-4059-a071-883759dd137a-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.945036 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/50bbc7b5-2171-4059-a071-883759dd137a-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" 
(UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.945091 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/50bbc7b5-2171-4059-a071-883759dd137a-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.945137 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/50bbc7b5-2171-4059-a071-883759dd137a-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.945183 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/50bbc7b5-2171-4059-a071-883759dd137a-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.945562 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/50bbc7b5-2171-4059-a071-883759dd137a-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.946411 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/50bbc7b5-2171-4059-a071-883759dd137a-build-proxy-ca-bundles\") pod 
\"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.946595 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/50bbc7b5-2171-4059-a071-883759dd137a-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.946873 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/50bbc7b5-2171-4059-a071-883759dd137a-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.947301 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/50bbc7b5-2171-4059-a071-883759dd137a-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.947678 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/50bbc7b5-2171-4059-a071-883759dd137a-builder-dockercfg-zp4ph-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.947922 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/50bbc7b5-2171-4059-a071-883759dd137a-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.957267 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/50bbc7b5-2171-4059-a071-883759dd137a-builder-dockercfg-zp4ph-push\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.970019 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfhzq\" (UniqueName: \"kubernetes.io/projected/50bbc7b5-2171-4059-a071-883759dd137a-kube-api-access-hfhzq\") pod \"smart-gateway-operator-2-build\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:15 crc kubenswrapper[4928]: I1122 00:22:15.998694 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:22:16 crc kubenswrapper[4928]: I1122 00:22:16.252001 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Nov 22 00:22:16 crc kubenswrapper[4928]: I1122 00:22:16.528438 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"50bbc7b5-2171-4059-a071-883759dd137a","Type":"ContainerStarted","Data":"b984910bf4316ed5096bfca7f8d45ba8efef79604be073294352995ed21822e0"} Nov 22 00:22:17 crc kubenswrapper[4928]: I1122 00:22:17.536570 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"50bbc7b5-2171-4059-a071-883759dd137a","Type":"ContainerStarted","Data":"f3f89d96c973612c807a5e5ab3c39907e9091924c91a6380abaf07cec54b7ad6"} Nov 22 00:22:17 crc kubenswrapper[4928]: I1122 00:22:17.625505 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c7185da-47bf-493a-aae1-4236cd4c8267" path="/var/lib/kubelet/pods/0c7185da-47bf-493a-aae1-4236cd4c8267/volumes" Nov 22 00:22:18 crc kubenswrapper[4928]: I1122 00:22:18.543276 4928 generic.go:334] "Generic (PLEG): container finished" podID="50bbc7b5-2171-4059-a071-883759dd137a" containerID="f3f89d96c973612c807a5e5ab3c39907e9091924c91a6380abaf07cec54b7ad6" exitCode=0 Nov 22 00:22:18 crc kubenswrapper[4928]: I1122 00:22:18.543329 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"50bbc7b5-2171-4059-a071-883759dd137a","Type":"ContainerDied","Data":"f3f89d96c973612c807a5e5ab3c39907e9091924c91a6380abaf07cec54b7ad6"} Nov 22 00:22:19 crc kubenswrapper[4928]: I1122 00:22:19.551409 4928 generic.go:334] "Generic (PLEG): container finished" podID="50bbc7b5-2171-4059-a071-883759dd137a" containerID="25f5b6f5338d4bd384dcf38a9eca2638ec48a9e7d3bef1d0fa99e63f1f9822b8" exitCode=0 Nov 22 
00:22:19 crc kubenswrapper[4928]: I1122 00:22:19.551466 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"50bbc7b5-2171-4059-a071-883759dd137a","Type":"ContainerDied","Data":"25f5b6f5338d4bd384dcf38a9eca2638ec48a9e7d3bef1d0fa99e63f1f9822b8"} Nov 22 00:22:19 crc kubenswrapper[4928]: I1122 00:22:19.595859 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_50bbc7b5-2171-4059-a071-883759dd137a/manage-dockerfile/0.log" Nov 22 00:22:20 crc kubenswrapper[4928]: I1122 00:22:20.563658 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"50bbc7b5-2171-4059-a071-883759dd137a","Type":"ContainerStarted","Data":"34732099678c513085e030e3eb9df5d5641fc523378a03e0630a501fdafd3524"} Nov 22 00:22:20 crc kubenswrapper[4928]: I1122 00:22:20.603348 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-2-build" podStartSLOduration=5.603323959 podStartE2EDuration="5.603323959s" podCreationTimestamp="2025-11-22 00:22:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:22:20.598812846 +0000 UTC m=+975.886844037" watchObservedRunningTime="2025-11-22 00:22:20.603323959 +0000 UTC m=+975.891355150" Nov 22 00:22:36 crc kubenswrapper[4928]: I1122 00:22:36.707489 4928 patch_prober.go:28] interesting pod/machine-config-daemon-hp7cp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 00:22:36 crc kubenswrapper[4928]: I1122 00:22:36.708131 4928 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" 
podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 00:23:06 crc kubenswrapper[4928]: I1122 00:23:06.706845 4928 patch_prober.go:28] interesting pod/machine-config-daemon-hp7cp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 00:23:06 crc kubenswrapper[4928]: I1122 00:23:06.707355 4928 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 00:23:06 crc kubenswrapper[4928]: I1122 00:23:06.707396 4928 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" Nov 22 00:23:06 crc kubenswrapper[4928]: I1122 00:23:06.707926 4928 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ae127a951fcea164fc2fff2b44252ffdfd5172caf45047681eadba5c13f25fb2"} pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 00:23:06 crc kubenswrapper[4928]: I1122 00:23:06.707974 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" containerID="cri-o://ae127a951fcea164fc2fff2b44252ffdfd5172caf45047681eadba5c13f25fb2" gracePeriod=600 Nov 22 
00:23:07 crc kubenswrapper[4928]: I1122 00:23:07.862574 4928 generic.go:334] "Generic (PLEG): container finished" podID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerID="ae127a951fcea164fc2fff2b44252ffdfd5172caf45047681eadba5c13f25fb2" exitCode=0 Nov 22 00:23:07 crc kubenswrapper[4928]: I1122 00:23:07.862645 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" event={"ID":"897ce64d-0600-4c6a-b684-0c1683fa14bc","Type":"ContainerDied","Data":"ae127a951fcea164fc2fff2b44252ffdfd5172caf45047681eadba5c13f25fb2"} Nov 22 00:23:07 crc kubenswrapper[4928]: I1122 00:23:07.862964 4928 scope.go:117] "RemoveContainer" containerID="32211f58683787fb757bd9a4fc769db246d8d45c2afe49257b9494ed6fed87fd" Nov 22 00:23:08 crc kubenswrapper[4928]: I1122 00:23:08.872305 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" event={"ID":"897ce64d-0600-4c6a-b684-0c1683fa14bc","Type":"ContainerStarted","Data":"dd8868cc76de4a44c75c379e88a3922b5b51242da6b3cba52bc0cd03f7e25709"} Nov 22 00:23:37 crc kubenswrapper[4928]: I1122 00:23:37.028558 4928 generic.go:334] "Generic (PLEG): container finished" podID="50bbc7b5-2171-4059-a071-883759dd137a" containerID="34732099678c513085e030e3eb9df5d5641fc523378a03e0630a501fdafd3524" exitCode=0 Nov 22 00:23:37 crc kubenswrapper[4928]: I1122 00:23:37.028634 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"50bbc7b5-2171-4059-a071-883759dd137a","Type":"ContainerDied","Data":"34732099678c513085e030e3eb9df5d5641fc523378a03e0630a501fdafd3524"} Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.281381 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.438762 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/50bbc7b5-2171-4059-a071-883759dd137a-builder-dockercfg-zp4ph-push\") pod \"50bbc7b5-2171-4059-a071-883759dd137a\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.438850 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/50bbc7b5-2171-4059-a071-883759dd137a-node-pullsecrets\") pod \"50bbc7b5-2171-4059-a071-883759dd137a\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.438885 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/50bbc7b5-2171-4059-a071-883759dd137a-build-proxy-ca-bundles\") pod \"50bbc7b5-2171-4059-a071-883759dd137a\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.438917 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/50bbc7b5-2171-4059-a071-883759dd137a-builder-dockercfg-zp4ph-pull\") pod \"50bbc7b5-2171-4059-a071-883759dd137a\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.438976 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/50bbc7b5-2171-4059-a071-883759dd137a-build-blob-cache\") pod \"50bbc7b5-2171-4059-a071-883759dd137a\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.438915 4928 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50bbc7b5-2171-4059-a071-883759dd137a-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "50bbc7b5-2171-4059-a071-883759dd137a" (UID: "50bbc7b5-2171-4059-a071-883759dd137a"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.438998 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/50bbc7b5-2171-4059-a071-883759dd137a-buildworkdir\") pod \"50bbc7b5-2171-4059-a071-883759dd137a\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.439086 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/50bbc7b5-2171-4059-a071-883759dd137a-build-system-configs\") pod \"50bbc7b5-2171-4059-a071-883759dd137a\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.439139 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/50bbc7b5-2171-4059-a071-883759dd137a-container-storage-run\") pod \"50bbc7b5-2171-4059-a071-883759dd137a\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.439179 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/50bbc7b5-2171-4059-a071-883759dd137a-buildcachedir\") pod \"50bbc7b5-2171-4059-a071-883759dd137a\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.439231 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/50bbc7b5-2171-4059-a071-883759dd137a-build-ca-bundles\") pod \"50bbc7b5-2171-4059-a071-883759dd137a\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.439269 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/50bbc7b5-2171-4059-a071-883759dd137a-container-storage-root\") pod \"50bbc7b5-2171-4059-a071-883759dd137a\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.439303 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50bbc7b5-2171-4059-a071-883759dd137a-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "50bbc7b5-2171-4059-a071-883759dd137a" (UID: "50bbc7b5-2171-4059-a071-883759dd137a"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.439345 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfhzq\" (UniqueName: \"kubernetes.io/projected/50bbc7b5-2171-4059-a071-883759dd137a-kube-api-access-hfhzq\") pod \"50bbc7b5-2171-4059-a071-883759dd137a\" (UID: \"50bbc7b5-2171-4059-a071-883759dd137a\") " Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.439840 4928 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/50bbc7b5-2171-4059-a071-883759dd137a-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.439861 4928 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/50bbc7b5-2171-4059-a071-883759dd137a-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.439994 4928 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50bbc7b5-2171-4059-a071-883759dd137a-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "50bbc7b5-2171-4059-a071-883759dd137a" (UID: "50bbc7b5-2171-4059-a071-883759dd137a"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.440073 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50bbc7b5-2171-4059-a071-883759dd137a-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "50bbc7b5-2171-4059-a071-883759dd137a" (UID: "50bbc7b5-2171-4059-a071-883759dd137a"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.440373 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50bbc7b5-2171-4059-a071-883759dd137a-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "50bbc7b5-2171-4059-a071-883759dd137a" (UID: "50bbc7b5-2171-4059-a071-883759dd137a"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.440375 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50bbc7b5-2171-4059-a071-883759dd137a-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "50bbc7b5-2171-4059-a071-883759dd137a" (UID: "50bbc7b5-2171-4059-a071-883759dd137a"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.441186 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50bbc7b5-2171-4059-a071-883759dd137a-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "50bbc7b5-2171-4059-a071-883759dd137a" (UID: "50bbc7b5-2171-4059-a071-883759dd137a"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.447046 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50bbc7b5-2171-4059-a071-883759dd137a-builder-dockercfg-zp4ph-pull" (OuterVolumeSpecName: "builder-dockercfg-zp4ph-pull") pod "50bbc7b5-2171-4059-a071-883759dd137a" (UID: "50bbc7b5-2171-4059-a071-883759dd137a"). InnerVolumeSpecName "builder-dockercfg-zp4ph-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.447038 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50bbc7b5-2171-4059-a071-883759dd137a-kube-api-access-hfhzq" (OuterVolumeSpecName: "kube-api-access-hfhzq") pod "50bbc7b5-2171-4059-a071-883759dd137a" (UID: "50bbc7b5-2171-4059-a071-883759dd137a"). InnerVolumeSpecName "kube-api-access-hfhzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.447424 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50bbc7b5-2171-4059-a071-883759dd137a-builder-dockercfg-zp4ph-push" (OuterVolumeSpecName: "builder-dockercfg-zp4ph-push") pod "50bbc7b5-2171-4059-a071-883759dd137a" (UID: "50bbc7b5-2171-4059-a071-883759dd137a"). InnerVolumeSpecName "builder-dockercfg-zp4ph-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.541548 4928 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/50bbc7b5-2171-4059-a071-883759dd137a-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.541962 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfhzq\" (UniqueName: \"kubernetes.io/projected/50bbc7b5-2171-4059-a071-883759dd137a-kube-api-access-hfhzq\") on node \"crc\" DevicePath \"\"" Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.541977 4928 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/50bbc7b5-2171-4059-a071-883759dd137a-builder-dockercfg-zp4ph-push\") on node \"crc\" DevicePath \"\"" Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.541991 4928 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/50bbc7b5-2171-4059-a071-883759dd137a-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.542004 4928 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/50bbc7b5-2171-4059-a071-883759dd137a-builder-dockercfg-zp4ph-pull\") on node \"crc\" DevicePath \"\"" Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.542017 4928 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/50bbc7b5-2171-4059-a071-883759dd137a-buildworkdir\") on node \"crc\" DevicePath \"\"" Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.542028 4928 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/50bbc7b5-2171-4059-a071-883759dd137a-build-system-configs\") on node 
\"crc\" DevicePath \"\"" Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.542040 4928 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/50bbc7b5-2171-4059-a071-883759dd137a-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.641160 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50bbc7b5-2171-4059-a071-883759dd137a-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "50bbc7b5-2171-4059-a071-883759dd137a" (UID: "50bbc7b5-2171-4059-a071-883759dd137a"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:23:38 crc kubenswrapper[4928]: I1122 00:23:38.643148 4928 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/50bbc7b5-2171-4059-a071-883759dd137a-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 22 00:23:39 crc kubenswrapper[4928]: I1122 00:23:39.042781 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"50bbc7b5-2171-4059-a071-883759dd137a","Type":"ContainerDied","Data":"b984910bf4316ed5096bfca7f8d45ba8efef79604be073294352995ed21822e0"} Nov 22 00:23:39 crc kubenswrapper[4928]: I1122 00:23:39.042845 4928 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b984910bf4316ed5096bfca7f8d45ba8efef79604be073294352995ed21822e0" Nov 22 00:23:39 crc kubenswrapper[4928]: I1122 00:23:39.042922 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Nov 22 00:23:40 crc kubenswrapper[4928]: I1122 00:23:40.478380 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50bbc7b5-2171-4059-a071-883759dd137a-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "50bbc7b5-2171-4059-a071-883759dd137a" (UID: "50bbc7b5-2171-4059-a071-883759dd137a"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:23:40 crc kubenswrapper[4928]: I1122 00:23:40.567962 4928 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/50bbc7b5-2171-4059-a071-883759dd137a-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 22 00:23:42 crc kubenswrapper[4928]: I1122 00:23:42.799635 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-1-build"] Nov 22 00:23:42 crc kubenswrapper[4928]: E1122 00:23:42.800376 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50bbc7b5-2171-4059-a071-883759dd137a" containerName="manage-dockerfile" Nov 22 00:23:42 crc kubenswrapper[4928]: I1122 00:23:42.800393 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="50bbc7b5-2171-4059-a071-883759dd137a" containerName="manage-dockerfile" Nov 22 00:23:42 crc kubenswrapper[4928]: E1122 00:23:42.800406 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50bbc7b5-2171-4059-a071-883759dd137a" containerName="docker-build" Nov 22 00:23:42 crc kubenswrapper[4928]: I1122 00:23:42.800413 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="50bbc7b5-2171-4059-a071-883759dd137a" containerName="docker-build" Nov 22 00:23:42 crc kubenswrapper[4928]: E1122 00:23:42.800441 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50bbc7b5-2171-4059-a071-883759dd137a" containerName="git-clone" Nov 22 00:23:42 crc 
kubenswrapper[4928]: I1122 00:23:42.800450 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="50bbc7b5-2171-4059-a071-883759dd137a" containerName="git-clone" Nov 22 00:23:42 crc kubenswrapper[4928]: I1122 00:23:42.800581 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="50bbc7b5-2171-4059-a071-883759dd137a" containerName="docker-build" Nov 22 00:23:42 crc kubenswrapper[4928]: I1122 00:23:42.801377 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Nov 22 00:23:42 crc kubenswrapper[4928]: I1122 00:23:42.803988 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-ca" Nov 22 00:23:42 crc kubenswrapper[4928]: I1122 00:23:42.804067 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-sys-config" Nov 22 00:23:42 crc kubenswrapper[4928]: I1122 00:23:42.804173 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-zp4ph" Nov 22 00:23:42 crc kubenswrapper[4928]: I1122 00:23:42.812070 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-global-ca" Nov 22 00:23:42 crc kubenswrapper[4928]: I1122 00:23:42.820161 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Nov 22 00:23:42 crc kubenswrapper[4928]: I1122 00:23:42.998061 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-build-system-configs\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:42 crc kubenswrapper[4928]: I1122 00:23:42.998436 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:42 crc kubenswrapper[4928]: I1122 00:23:42.998572 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-buildworkdir\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:42 crc kubenswrapper[4928]: I1122 00:23:42.998700 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-buildcachedir\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:42 crc kubenswrapper[4928]: I1122 00:23:42.998861 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-builder-dockercfg-zp4ph-push\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:42 crc kubenswrapper[4928]: I1122 00:23:42.999037 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-container-storage-root\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:42 crc kubenswrapper[4928]: I1122 00:23:42.999209 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psnrs\" (UniqueName: 
\"kubernetes.io/projected/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-kube-api-access-psnrs\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:42 crc kubenswrapper[4928]: I1122 00:23:42.999343 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:42 crc kubenswrapper[4928]: I1122 00:23:42.999459 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:42 crc kubenswrapper[4928]: I1122 00:23:42.999577 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:42 crc kubenswrapper[4928]: I1122 00:23:42.999698 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-builder-dockercfg-zp4ph-pull\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:42 crc kubenswrapper[4928]: I1122 00:23:42.999828 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" 
(UniqueName: \"kubernetes.io/empty-dir/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-container-storage-run\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:43 crc kubenswrapper[4928]: I1122 00:23:43.101507 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:43 crc kubenswrapper[4928]: I1122 00:23:43.101555 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-buildworkdir\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:43 crc kubenswrapper[4928]: I1122 00:23:43.101585 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-buildcachedir\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:43 crc kubenswrapper[4928]: I1122 00:23:43.101617 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-builder-dockercfg-zp4ph-push\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:43 crc kubenswrapper[4928]: I1122 00:23:43.101640 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-container-storage-root\") pod 
\"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:43 crc kubenswrapper[4928]: I1122 00:23:43.101658 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psnrs\" (UniqueName: \"kubernetes.io/projected/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-kube-api-access-psnrs\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:43 crc kubenswrapper[4928]: I1122 00:23:43.101681 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:43 crc kubenswrapper[4928]: I1122 00:23:43.101697 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:43 crc kubenswrapper[4928]: I1122 00:23:43.101721 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:43 crc kubenswrapper[4928]: I1122 00:23:43.101739 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-builder-dockercfg-zp4ph-pull\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " 
pod="service-telemetry/sg-core-1-build" Nov 22 00:23:43 crc kubenswrapper[4928]: I1122 00:23:43.101760 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-container-storage-run\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:43 crc kubenswrapper[4928]: I1122 00:23:43.101787 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-build-system-configs\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:43 crc kubenswrapper[4928]: I1122 00:23:43.102656 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-build-system-configs\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:43 crc kubenswrapper[4928]: I1122 00:23:43.102818 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:43 crc kubenswrapper[4928]: I1122 00:23:43.102818 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:43 crc kubenswrapper[4928]: I1122 00:23:43.102873 4928 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:43 crc kubenswrapper[4928]: I1122 00:23:43.102910 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-buildcachedir\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:43 crc kubenswrapper[4928]: I1122 00:23:43.103008 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-buildworkdir\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:43 crc kubenswrapper[4928]: I1122 00:23:43.103295 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-container-storage-root\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:43 crc kubenswrapper[4928]: I1122 00:23:43.103493 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-container-storage-run\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:43 crc kubenswrapper[4928]: I1122 00:23:43.103696 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:43 crc kubenswrapper[4928]: I1122 00:23:43.108624 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-builder-dockercfg-zp4ph-push\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:43 crc kubenswrapper[4928]: I1122 00:23:43.116741 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-builder-dockercfg-zp4ph-pull\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:43 crc kubenswrapper[4928]: I1122 00:23:43.119290 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psnrs\" (UniqueName: \"kubernetes.io/projected/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-kube-api-access-psnrs\") pod \"sg-core-1-build\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " pod="service-telemetry/sg-core-1-build" Nov 22 00:23:43 crc kubenswrapper[4928]: I1122 00:23:43.122104 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Nov 22 00:23:43 crc kubenswrapper[4928]: I1122 00:23:43.301392 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Nov 22 00:23:44 crc kubenswrapper[4928]: I1122 00:23:44.074604 4928 generic.go:334] "Generic (PLEG): container finished" podID="8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0" containerID="1cc40cd0f4cf4c5a4e86cd6add7fdcbbd4cc353a557281c149d8ad5a7ca453a1" exitCode=0 Nov 22 00:23:44 crc kubenswrapper[4928]: I1122 00:23:44.074659 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0","Type":"ContainerDied","Data":"1cc40cd0f4cf4c5a4e86cd6add7fdcbbd4cc353a557281c149d8ad5a7ca453a1"} Nov 22 00:23:44 crc kubenswrapper[4928]: I1122 00:23:44.074687 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0","Type":"ContainerStarted","Data":"5f4463c5729c2725f93fe3929fd5b08aceca187b16989782336befaac7d83a64"} Nov 22 00:23:45 crc kubenswrapper[4928]: I1122 00:23:45.083033 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0","Type":"ContainerStarted","Data":"4845637fc05fbb59eba6539b29188a7b3112fa6fd2c5092abd4fb1c7ee47aa1a"} Nov 22 00:23:45 crc kubenswrapper[4928]: I1122 00:23:45.105974 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-1-build" podStartSLOduration=3.105955649 podStartE2EDuration="3.105955649s" podCreationTimestamp="2025-11-22 00:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:23:45.10378389 +0000 UTC m=+1060.391815081" watchObservedRunningTime="2025-11-22 00:23:45.105955649 +0000 UTC m=+1060.393986850" Nov 22 00:23:53 crc 
kubenswrapper[4928]: I1122 00:23:53.149621 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Nov 22 00:23:53 crc kubenswrapper[4928]: I1122 00:23:53.150345 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/sg-core-1-build" podUID="8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0" containerName="docker-build" containerID="cri-o://4845637fc05fbb59eba6539b29188a7b3112fa6fd2c5092abd4fb1c7ee47aa1a" gracePeriod=30 Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.011124 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0/docker-build/0.log" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.012013 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.138567 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0/docker-build/0.log" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.138930 4928 generic.go:334] "Generic (PLEG): container finished" podID="8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0" containerID="4845637fc05fbb59eba6539b29188a7b3112fa6fd2c5092abd4fb1c7ee47aa1a" exitCode=1 Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.138970 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0","Type":"ContainerDied","Data":"4845637fc05fbb59eba6539b29188a7b3112fa6fd2c5092abd4fb1c7ee47aa1a"} Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.138992 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.139010 4928 scope.go:117] "RemoveContainer" containerID="4845637fc05fbb59eba6539b29188a7b3112fa6fd2c5092abd4fb1c7ee47aa1a" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.138998 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0","Type":"ContainerDied","Data":"5f4463c5729c2725f93fe3929fd5b08aceca187b16989782336befaac7d83a64"} Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.142259 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psnrs\" (UniqueName: \"kubernetes.io/projected/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-kube-api-access-psnrs\") pod \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.142346 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-buildworkdir\") pod \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.142522 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-builder-dockercfg-zp4ph-push\") pod \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.142560 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-build-ca-bundles\") pod \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\" (UID: 
\"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.142588 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-node-pullsecrets\") pod \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.142617 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-builder-dockercfg-zp4ph-pull\") pod \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.142654 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-container-storage-run\") pod \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.142676 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-buildcachedir\") pod \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.142713 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-container-storage-root\") pod \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.142736 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-build-blob-cache\") pod \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.142760 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-build-proxy-ca-bundles\") pod \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.142848 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-build-system-configs\") pod \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\" (UID: \"8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0\") " Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.143443 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0" (UID: "8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.143489 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0" (UID: "8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.143513 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0" (UID: "8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.144223 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0" (UID: "8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.144504 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0" (UID: "8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.144576 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0" (UID: "8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.144826 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0" (UID: "8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.147891 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-builder-dockercfg-zp4ph-push" (OuterVolumeSpecName: "builder-dockercfg-zp4ph-push") pod "8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0" (UID: "8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0"). InnerVolumeSpecName "builder-dockercfg-zp4ph-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.147971 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-kube-api-access-psnrs" (OuterVolumeSpecName: "kube-api-access-psnrs") pod "8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0" (UID: "8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0"). InnerVolumeSpecName "kube-api-access-psnrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.157939 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-builder-dockercfg-zp4ph-pull" (OuterVolumeSpecName: "builder-dockercfg-zp4ph-pull") pod "8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0" (UID: "8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0"). InnerVolumeSpecName "builder-dockercfg-zp4ph-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.176458 4928 scope.go:117] "RemoveContainer" containerID="1cc40cd0f4cf4c5a4e86cd6add7fdcbbd4cc353a557281c149d8ad5a7ca453a1" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.196915 4928 scope.go:117] "RemoveContainer" containerID="4845637fc05fbb59eba6539b29188a7b3112fa6fd2c5092abd4fb1c7ee47aa1a" Nov 22 00:23:54 crc kubenswrapper[4928]: E1122 00:23:54.197392 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4845637fc05fbb59eba6539b29188a7b3112fa6fd2c5092abd4fb1c7ee47aa1a\": container with ID starting with 4845637fc05fbb59eba6539b29188a7b3112fa6fd2c5092abd4fb1c7ee47aa1a not found: ID does not exist" containerID="4845637fc05fbb59eba6539b29188a7b3112fa6fd2c5092abd4fb1c7ee47aa1a" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.197439 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4845637fc05fbb59eba6539b29188a7b3112fa6fd2c5092abd4fb1c7ee47aa1a"} err="failed to get container status \"4845637fc05fbb59eba6539b29188a7b3112fa6fd2c5092abd4fb1c7ee47aa1a\": rpc error: code = NotFound desc = could not find container \"4845637fc05fbb59eba6539b29188a7b3112fa6fd2c5092abd4fb1c7ee47aa1a\": container with ID starting with 4845637fc05fbb59eba6539b29188a7b3112fa6fd2c5092abd4fb1c7ee47aa1a not found: ID does not exist" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.197468 4928 scope.go:117] "RemoveContainer" containerID="1cc40cd0f4cf4c5a4e86cd6add7fdcbbd4cc353a557281c149d8ad5a7ca453a1" Nov 22 00:23:54 crc kubenswrapper[4928]: E1122 00:23:54.197887 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cc40cd0f4cf4c5a4e86cd6add7fdcbbd4cc353a557281c149d8ad5a7ca453a1\": container with ID starting with 
1cc40cd0f4cf4c5a4e86cd6add7fdcbbd4cc353a557281c149d8ad5a7ca453a1 not found: ID does not exist" containerID="1cc40cd0f4cf4c5a4e86cd6add7fdcbbd4cc353a557281c149d8ad5a7ca453a1" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.197924 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cc40cd0f4cf4c5a4e86cd6add7fdcbbd4cc353a557281c149d8ad5a7ca453a1"} err="failed to get container status \"1cc40cd0f4cf4c5a4e86cd6add7fdcbbd4cc353a557281c149d8ad5a7ca453a1\": rpc error: code = NotFound desc = could not find container \"1cc40cd0f4cf4c5a4e86cd6add7fdcbbd4cc353a557281c149d8ad5a7ca453a1\": container with ID starting with 1cc40cd0f4cf4c5a4e86cd6add7fdcbbd4cc353a557281c149d8ad5a7ca453a1 not found: ID does not exist" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.244068 4928 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.244097 4928 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-builder-dockercfg-zp4ph-pull\") on node \"crc\" DevicePath \"\"" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.244107 4928 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.244115 4928 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.244124 4928 reconciler_common.go:293] "Volume detached for 
volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.244132 4928 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-build-system-configs\") on node \"crc\" DevicePath \"\"" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.244139 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psnrs\" (UniqueName: \"kubernetes.io/projected/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-kube-api-access-psnrs\") on node \"crc\" DevicePath \"\"" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.244147 4928 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-buildworkdir\") on node \"crc\" DevicePath \"\"" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.244155 4928 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-builder-dockercfg-zp4ph-push\") on node \"crc\" DevicePath \"\"" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.244162 4928 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.246267 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0" (UID: "8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.345540 4928 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.402266 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0" (UID: "8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.447031 4928 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.475043 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.481498 4928 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-core-1-build"] Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.751147 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-2-build"] Nov 22 00:23:54 crc kubenswrapper[4928]: E1122 00:23:54.751438 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0" containerName="manage-dockerfile" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.751451 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0" containerName="manage-dockerfile" Nov 22 00:23:54 crc kubenswrapper[4928]: E1122 
00:23:54.751464 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0" containerName="docker-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.751470 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0" containerName="docker-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.751603 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0" containerName="docker-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.752526 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.754847 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-ca" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.755196 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-global-ca" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.755231 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-sys-config" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.755439 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-zp4ph" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.773007 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.852618 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5559a223-ca10-4fe0-aaaa-07fe8360a52b-container-storage-root\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 
crc kubenswrapper[4928]: I1122 00:23:54.852671 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5559a223-ca10-4fe0-aaaa-07fe8360a52b-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.852708 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5559a223-ca10-4fe0-aaaa-07fe8360a52b-build-system-configs\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.852730 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/5559a223-ca10-4fe0-aaaa-07fe8360a52b-builder-dockercfg-zp4ph-pull\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.852748 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7r9c\" (UniqueName: \"kubernetes.io/projected/5559a223-ca10-4fe0-aaaa-07fe8360a52b-kube-api-access-l7r9c\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.852768 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/5559a223-ca10-4fe0-aaaa-07fe8360a52b-builder-dockercfg-zp4ph-push\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " 
pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.852804 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5559a223-ca10-4fe0-aaaa-07fe8360a52b-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.852830 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5559a223-ca10-4fe0-aaaa-07fe8360a52b-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.852961 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5559a223-ca10-4fe0-aaaa-07fe8360a52b-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.853035 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5559a223-ca10-4fe0-aaaa-07fe8360a52b-buildcachedir\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.853053 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5559a223-ca10-4fe0-aaaa-07fe8360a52b-container-storage-run\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " 
pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.853073 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5559a223-ca10-4fe0-aaaa-07fe8360a52b-buildworkdir\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.954221 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5559a223-ca10-4fe0-aaaa-07fe8360a52b-build-system-configs\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.954289 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/5559a223-ca10-4fe0-aaaa-07fe8360a52b-builder-dockercfg-zp4ph-pull\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.954320 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7r9c\" (UniqueName: \"kubernetes.io/projected/5559a223-ca10-4fe0-aaaa-07fe8360a52b-kube-api-access-l7r9c\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.954351 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/5559a223-ca10-4fe0-aaaa-07fe8360a52b-builder-dockercfg-zp4ph-push\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 
00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.954381 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5559a223-ca10-4fe0-aaaa-07fe8360a52b-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.954414 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5559a223-ca10-4fe0-aaaa-07fe8360a52b-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.954434 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5559a223-ca10-4fe0-aaaa-07fe8360a52b-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.954460 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5559a223-ca10-4fe0-aaaa-07fe8360a52b-buildcachedir\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.954484 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5559a223-ca10-4fe0-aaaa-07fe8360a52b-container-storage-run\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.954509 4928 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5559a223-ca10-4fe0-aaaa-07fe8360a52b-buildworkdir\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.954562 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5559a223-ca10-4fe0-aaaa-07fe8360a52b-container-storage-root\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.954588 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5559a223-ca10-4fe0-aaaa-07fe8360a52b-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.954631 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5559a223-ca10-4fe0-aaaa-07fe8360a52b-buildcachedir\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.955334 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5559a223-ca10-4fe0-aaaa-07fe8360a52b-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.955364 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5559a223-ca10-4fe0-aaaa-07fe8360a52b-buildworkdir\") pod 
\"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.954680 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5559a223-ca10-4fe0-aaaa-07fe8360a52b-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.955622 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5559a223-ca10-4fe0-aaaa-07fe8360a52b-container-storage-root\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.955640 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5559a223-ca10-4fe0-aaaa-07fe8360a52b-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.955702 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5559a223-ca10-4fe0-aaaa-07fe8360a52b-container-storage-run\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.955956 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5559a223-ca10-4fe0-aaaa-07fe8360a52b-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc 
kubenswrapper[4928]: I1122 00:23:54.956290 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5559a223-ca10-4fe0-aaaa-07fe8360a52b-build-system-configs\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.958381 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/5559a223-ca10-4fe0-aaaa-07fe8360a52b-builder-dockercfg-zp4ph-push\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.959175 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/5559a223-ca10-4fe0-aaaa-07fe8360a52b-builder-dockercfg-zp4ph-pull\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:54 crc kubenswrapper[4928]: I1122 00:23:54.971103 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7r9c\" (UniqueName: \"kubernetes.io/projected/5559a223-ca10-4fe0-aaaa-07fe8360a52b-kube-api-access-l7r9c\") pod \"sg-core-2-build\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " pod="service-telemetry/sg-core-2-build" Nov 22 00:23:55 crc kubenswrapper[4928]: I1122 00:23:55.070092 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Nov 22 00:23:55 crc kubenswrapper[4928]: I1122 00:23:55.242749 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Nov 22 00:23:55 crc kubenswrapper[4928]: I1122 00:23:55.625188 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0" path="/var/lib/kubelet/pods/8f383c7b-fb9e-4f04-9f30-3f4bbf0bd8a0/volumes" Nov 22 00:23:56 crc kubenswrapper[4928]: I1122 00:23:56.160122 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"5559a223-ca10-4fe0-aaaa-07fe8360a52b","Type":"ContainerStarted","Data":"c5d49c7a65e140ac710e4490eeadb8517c8443affd99baf376e9360834b7061e"} Nov 22 00:23:56 crc kubenswrapper[4928]: I1122 00:23:56.160614 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"5559a223-ca10-4fe0-aaaa-07fe8360a52b","Type":"ContainerStarted","Data":"60fe1a91ef3b5352fb8bef9a4cd4b833179ac9a67fa3546fded15b01377c4b47"} Nov 22 00:23:57 crc kubenswrapper[4928]: I1122 00:23:57.168993 4928 generic.go:334] "Generic (PLEG): container finished" podID="5559a223-ca10-4fe0-aaaa-07fe8360a52b" containerID="c5d49c7a65e140ac710e4490eeadb8517c8443affd99baf376e9360834b7061e" exitCode=0 Nov 22 00:23:57 crc kubenswrapper[4928]: I1122 00:23:57.169045 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"5559a223-ca10-4fe0-aaaa-07fe8360a52b","Type":"ContainerDied","Data":"c5d49c7a65e140ac710e4490eeadb8517c8443affd99baf376e9360834b7061e"} Nov 22 00:23:58 crc kubenswrapper[4928]: I1122 00:23:58.177395 4928 generic.go:334] "Generic (PLEG): container finished" podID="5559a223-ca10-4fe0-aaaa-07fe8360a52b" containerID="919ba1e17e91e3f18e0fc90f2bc26820d64af219ab7e080d9b951747cb4b6d90" exitCode=0 Nov 22 00:23:58 crc kubenswrapper[4928]: I1122 00:23:58.177604 4928 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"5559a223-ca10-4fe0-aaaa-07fe8360a52b","Type":"ContainerDied","Data":"919ba1e17e91e3f18e0fc90f2bc26820d64af219ab7e080d9b951747cb4b6d90"} Nov 22 00:23:58 crc kubenswrapper[4928]: I1122 00:23:58.211045 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_5559a223-ca10-4fe0-aaaa-07fe8360a52b/manage-dockerfile/0.log" Nov 22 00:23:59 crc kubenswrapper[4928]: I1122 00:23:59.187184 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"5559a223-ca10-4fe0-aaaa-07fe8360a52b","Type":"ContainerStarted","Data":"c4257943855ed0e5ca21158f6f83c5b5a128e1ccbd51b868c003ee4bc0bf5e4f"} Nov 22 00:23:59 crc kubenswrapper[4928]: I1122 00:23:59.214051 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-2-build" podStartSLOduration=5.2140294 podStartE2EDuration="5.2140294s" podCreationTimestamp="2025-11-22 00:23:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:23:59.210016152 +0000 UTC m=+1074.498047363" watchObservedRunningTime="2025-11-22 00:23:59.2140294 +0000 UTC m=+1074.502060591" Nov 22 00:25:36 crc kubenswrapper[4928]: I1122 00:25:36.707657 4928 patch_prober.go:28] interesting pod/machine-config-daemon-hp7cp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 00:25:36 crc kubenswrapper[4928]: I1122 00:25:36.708566 4928 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 00:26:06 crc kubenswrapper[4928]: I1122 00:26:06.707788 4928 patch_prober.go:28] interesting pod/machine-config-daemon-hp7cp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 00:26:06 crc kubenswrapper[4928]: I1122 00:26:06.708915 4928 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 00:26:36 crc kubenswrapper[4928]: I1122 00:26:36.707611 4928 patch_prober.go:28] interesting pod/machine-config-daemon-hp7cp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 00:26:36 crc kubenswrapper[4928]: I1122 00:26:36.708220 4928 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 00:26:36 crc kubenswrapper[4928]: I1122 00:26:36.708281 4928 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" Nov 22 00:26:36 crc kubenswrapper[4928]: I1122 00:26:36.709043 4928 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"dd8868cc76de4a44c75c379e88a3922b5b51242da6b3cba52bc0cd03f7e25709"} pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 00:26:36 crc kubenswrapper[4928]: I1122 00:26:36.709113 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" containerID="cri-o://dd8868cc76de4a44c75c379e88a3922b5b51242da6b3cba52bc0cd03f7e25709" gracePeriod=600 Nov 22 00:26:37 crc kubenswrapper[4928]: I1122 00:26:37.202832 4928 generic.go:334] "Generic (PLEG): container finished" podID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerID="dd8868cc76de4a44c75c379e88a3922b5b51242da6b3cba52bc0cd03f7e25709" exitCode=0 Nov 22 00:26:37 crc kubenswrapper[4928]: I1122 00:26:37.202908 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" event={"ID":"897ce64d-0600-4c6a-b684-0c1683fa14bc","Type":"ContainerDied","Data":"dd8868cc76de4a44c75c379e88a3922b5b51242da6b3cba52bc0cd03f7e25709"} Nov 22 00:26:37 crc kubenswrapper[4928]: I1122 00:26:37.203268 4928 scope.go:117] "RemoveContainer" containerID="ae127a951fcea164fc2fff2b44252ffdfd5172caf45047681eadba5c13f25fb2" Nov 22 00:26:39 crc kubenswrapper[4928]: I1122 00:26:39.220670 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" event={"ID":"897ce64d-0600-4c6a-b684-0c1683fa14bc","Type":"ContainerStarted","Data":"2313a8e1c327d21680589179d678b34e89c05b2d249408c247573805e438ec91"} Nov 22 00:26:58 crc kubenswrapper[4928]: I1122 00:26:58.330210 4928 generic.go:334] "Generic (PLEG): container finished" podID="5559a223-ca10-4fe0-aaaa-07fe8360a52b" 
containerID="c4257943855ed0e5ca21158f6f83c5b5a128e1ccbd51b868c003ee4bc0bf5e4f" exitCode=0 Nov 22 00:26:58 crc kubenswrapper[4928]: I1122 00:26:58.330315 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"5559a223-ca10-4fe0-aaaa-07fe8360a52b","Type":"ContainerDied","Data":"c4257943855ed0e5ca21158f6f83c5b5a128e1ccbd51b868c003ee4bc0bf5e4f"} Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.613710 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.707281 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/5559a223-ca10-4fe0-aaaa-07fe8360a52b-builder-dockercfg-zp4ph-pull\") pod \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.707350 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5559a223-ca10-4fe0-aaaa-07fe8360a52b-container-storage-run\") pod \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.707439 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7r9c\" (UniqueName: \"kubernetes.io/projected/5559a223-ca10-4fe0-aaaa-07fe8360a52b-kube-api-access-l7r9c\") pod \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.707504 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5559a223-ca10-4fe0-aaaa-07fe8360a52b-buildworkdir\") pod \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\" 
(UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.707531 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/5559a223-ca10-4fe0-aaaa-07fe8360a52b-builder-dockercfg-zp4ph-push\") pod \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.707578 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5559a223-ca10-4fe0-aaaa-07fe8360a52b-node-pullsecrets\") pod \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.707603 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5559a223-ca10-4fe0-aaaa-07fe8360a52b-container-storage-root\") pod \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.707657 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5559a223-ca10-4fe0-aaaa-07fe8360a52b-build-system-configs\") pod \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.707697 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5559a223-ca10-4fe0-aaaa-07fe8360a52b-build-ca-bundles\") pod \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.707721 4928 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5559a223-ca10-4fe0-aaaa-07fe8360a52b-buildcachedir\") pod \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.707744 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5559a223-ca10-4fe0-aaaa-07fe8360a52b-build-blob-cache\") pod \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.707778 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5559a223-ca10-4fe0-aaaa-07fe8360a52b-build-proxy-ca-bundles\") pod \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\" (UID: \"5559a223-ca10-4fe0-aaaa-07fe8360a52b\") " Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.708039 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5559a223-ca10-4fe0-aaaa-07fe8360a52b-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "5559a223-ca10-4fe0-aaaa-07fe8360a52b" (UID: "5559a223-ca10-4fe0-aaaa-07fe8360a52b"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.708169 4928 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5559a223-ca10-4fe0-aaaa-07fe8360a52b-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.708227 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5559a223-ca10-4fe0-aaaa-07fe8360a52b-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "5559a223-ca10-4fe0-aaaa-07fe8360a52b" (UID: "5559a223-ca10-4fe0-aaaa-07fe8360a52b"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.708481 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5559a223-ca10-4fe0-aaaa-07fe8360a52b-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "5559a223-ca10-4fe0-aaaa-07fe8360a52b" (UID: "5559a223-ca10-4fe0-aaaa-07fe8360a52b"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.709514 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5559a223-ca10-4fe0-aaaa-07fe8360a52b-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "5559a223-ca10-4fe0-aaaa-07fe8360a52b" (UID: "5559a223-ca10-4fe0-aaaa-07fe8360a52b"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.710126 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5559a223-ca10-4fe0-aaaa-07fe8360a52b-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "5559a223-ca10-4fe0-aaaa-07fe8360a52b" (UID: "5559a223-ca10-4fe0-aaaa-07fe8360a52b"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.710531 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5559a223-ca10-4fe0-aaaa-07fe8360a52b-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "5559a223-ca10-4fe0-aaaa-07fe8360a52b" (UID: "5559a223-ca10-4fe0-aaaa-07fe8360a52b"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.714163 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5559a223-ca10-4fe0-aaaa-07fe8360a52b-kube-api-access-l7r9c" (OuterVolumeSpecName: "kube-api-access-l7r9c") pod "5559a223-ca10-4fe0-aaaa-07fe8360a52b" (UID: "5559a223-ca10-4fe0-aaaa-07fe8360a52b"). InnerVolumeSpecName "kube-api-access-l7r9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.714487 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5559a223-ca10-4fe0-aaaa-07fe8360a52b-builder-dockercfg-zp4ph-pull" (OuterVolumeSpecName: "builder-dockercfg-zp4ph-pull") pod "5559a223-ca10-4fe0-aaaa-07fe8360a52b" (UID: "5559a223-ca10-4fe0-aaaa-07fe8360a52b"). InnerVolumeSpecName "builder-dockercfg-zp4ph-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.714895 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5559a223-ca10-4fe0-aaaa-07fe8360a52b-builder-dockercfg-zp4ph-push" (OuterVolumeSpecName: "builder-dockercfg-zp4ph-push") pod "5559a223-ca10-4fe0-aaaa-07fe8360a52b" (UID: "5559a223-ca10-4fe0-aaaa-07fe8360a52b"). InnerVolumeSpecName "builder-dockercfg-zp4ph-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.718774 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5559a223-ca10-4fe0-aaaa-07fe8360a52b-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "5559a223-ca10-4fe0-aaaa-07fe8360a52b" (UID: "5559a223-ca10-4fe0-aaaa-07fe8360a52b"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.809612 4928 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5559a223-ca10-4fe0-aaaa-07fe8360a52b-buildworkdir\") on node \"crc\" DevicePath \"\"" Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.809648 4928 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/5559a223-ca10-4fe0-aaaa-07fe8360a52b-builder-dockercfg-zp4ph-push\") on node \"crc\" DevicePath \"\"" Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.809660 4928 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5559a223-ca10-4fe0-aaaa-07fe8360a52b-build-system-configs\") on node \"crc\" DevicePath \"\"" Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.809668 4928 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/5559a223-ca10-4fe0-aaaa-07fe8360a52b-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.809676 4928 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5559a223-ca10-4fe0-aaaa-07fe8360a52b-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.809685 4928 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5559a223-ca10-4fe0-aaaa-07fe8360a52b-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.809692 4928 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/5559a223-ca10-4fe0-aaaa-07fe8360a52b-builder-dockercfg-zp4ph-pull\") on node \"crc\" DevicePath \"\"" Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.809700 4928 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5559a223-ca10-4fe0-aaaa-07fe8360a52b-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 22 00:26:59 crc kubenswrapper[4928]: I1122 00:26:59.809709 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7r9c\" (UniqueName: \"kubernetes.io/projected/5559a223-ca10-4fe0-aaaa-07fe8360a52b-kube-api-access-l7r9c\") on node \"crc\" DevicePath \"\"" Nov 22 00:27:00 crc kubenswrapper[4928]: I1122 00:27:00.030158 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5559a223-ca10-4fe0-aaaa-07fe8360a52b-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "5559a223-ca10-4fe0-aaaa-07fe8360a52b" (UID: "5559a223-ca10-4fe0-aaaa-07fe8360a52b"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:27:00 crc kubenswrapper[4928]: I1122 00:27:00.114146 4928 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5559a223-ca10-4fe0-aaaa-07fe8360a52b-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 22 00:27:00 crc kubenswrapper[4928]: I1122 00:27:00.346454 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"5559a223-ca10-4fe0-aaaa-07fe8360a52b","Type":"ContainerDied","Data":"60fe1a91ef3b5352fb8bef9a4cd4b833179ac9a67fa3546fded15b01377c4b47"} Nov 22 00:27:00 crc kubenswrapper[4928]: I1122 00:27:00.346506 4928 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60fe1a91ef3b5352fb8bef9a4cd4b833179ac9a67fa3546fded15b01377c4b47" Nov 22 00:27:00 crc kubenswrapper[4928]: I1122 00:27:00.346508 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Nov 22 00:27:01 crc kubenswrapper[4928]: I1122 00:27:01.982215 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5559a223-ca10-4fe0-aaaa-07fe8360a52b-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "5559a223-ca10-4fe0-aaaa-07fe8360a52b" (UID: "5559a223-ca10-4fe0-aaaa-07fe8360a52b"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:27:02 crc kubenswrapper[4928]: I1122 00:27:02.034963 4928 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5559a223-ca10-4fe0-aaaa-07fe8360a52b-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.493035 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-1-build"] Nov 22 00:27:04 crc kubenswrapper[4928]: E1122 00:27:04.493441 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5559a223-ca10-4fe0-aaaa-07fe8360a52b" containerName="git-clone" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.493452 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="5559a223-ca10-4fe0-aaaa-07fe8360a52b" containerName="git-clone" Nov 22 00:27:04 crc kubenswrapper[4928]: E1122 00:27:04.493460 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5559a223-ca10-4fe0-aaaa-07fe8360a52b" containerName="docker-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.493466 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="5559a223-ca10-4fe0-aaaa-07fe8360a52b" containerName="docker-build" Nov 22 00:27:04 crc kubenswrapper[4928]: E1122 00:27:04.493477 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5559a223-ca10-4fe0-aaaa-07fe8360a52b" containerName="manage-dockerfile" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.493482 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="5559a223-ca10-4fe0-aaaa-07fe8360a52b" containerName="manage-dockerfile" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.493583 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="5559a223-ca10-4fe0-aaaa-07fe8360a52b" containerName="docker-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.494127 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.496053 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-ca" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.496313 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-zp4ph" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.497000 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-global-ca" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.497087 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-sys-config" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.509643 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.564041 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/fb29ad2c-9ce2-43fb-869f-317df89ea79e-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.564088 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/fb29ad2c-9ce2-43fb-869f-317df89ea79e-builder-dockercfg-zp4ph-push\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.564110 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/fb29ad2c-9ce2-43fb-869f-317df89ea79e-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.564126 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fb29ad2c-9ce2-43fb-869f-317df89ea79e-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.564141 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/fb29ad2c-9ce2-43fb-869f-317df89ea79e-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.564173 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/fb29ad2c-9ce2-43fb-869f-317df89ea79e-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.564196 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fb29ad2c-9ce2-43fb-869f-317df89ea79e-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.564222 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/fb29ad2c-9ce2-43fb-869f-317df89ea79e-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.564246 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hzht\" (UniqueName: \"kubernetes.io/projected/fb29ad2c-9ce2-43fb-869f-317df89ea79e-kube-api-access-9hzht\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.564263 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/fb29ad2c-9ce2-43fb-869f-317df89ea79e-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.564278 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/fb29ad2c-9ce2-43fb-869f-317df89ea79e-builder-dockercfg-zp4ph-pull\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.564294 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/fb29ad2c-9ce2-43fb-869f-317df89ea79e-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.665318 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fb29ad2c-9ce2-43fb-869f-317df89ea79e-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.665402 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/fb29ad2c-9ce2-43fb-869f-317df89ea79e-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.665432 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hzht\" (UniqueName: \"kubernetes.io/projected/fb29ad2c-9ce2-43fb-869f-317df89ea79e-kube-api-access-9hzht\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.665451 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/fb29ad2c-9ce2-43fb-869f-317df89ea79e-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.665478 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/fb29ad2c-9ce2-43fb-869f-317df89ea79e-builder-dockercfg-zp4ph-pull\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.665498 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/fb29ad2c-9ce2-43fb-869f-317df89ea79e-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.665518 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/fb29ad2c-9ce2-43fb-869f-317df89ea79e-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.665541 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/fb29ad2c-9ce2-43fb-869f-317df89ea79e-builder-dockercfg-zp4ph-push\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.665559 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fb29ad2c-9ce2-43fb-869f-317df89ea79e-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.665574 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fb29ad2c-9ce2-43fb-869f-317df89ea79e-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.665592 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/fb29ad2c-9ce2-43fb-869f-317df89ea79e-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.665624 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/fb29ad2c-9ce2-43fb-869f-317df89ea79e-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.665988 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/fb29ad2c-9ce2-43fb-869f-317df89ea79e-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.666037 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/fb29ad2c-9ce2-43fb-869f-317df89ea79e-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.666063 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fb29ad2c-9ce2-43fb-869f-317df89ea79e-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.666125 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fb29ad2c-9ce2-43fb-869f-317df89ea79e-node-pullsecrets\") pod 
\"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.666238 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/fb29ad2c-9ce2-43fb-869f-317df89ea79e-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.666433 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/fb29ad2c-9ce2-43fb-869f-317df89ea79e-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.666474 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/fb29ad2c-9ce2-43fb-869f-317df89ea79e-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.666708 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fb29ad2c-9ce2-43fb-869f-317df89ea79e-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.666900 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/fb29ad2c-9ce2-43fb-869f-317df89ea79e-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc 
kubenswrapper[4928]: I1122 00:27:04.670508 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/fb29ad2c-9ce2-43fb-869f-317df89ea79e-builder-dockercfg-zp4ph-push\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.670642 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/fb29ad2c-9ce2-43fb-869f-317df89ea79e-builder-dockercfg-zp4ph-pull\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.681295 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hzht\" (UniqueName: \"kubernetes.io/projected/fb29ad2c-9ce2-43fb-869f-317df89ea79e-kube-api-access-9hzht\") pod \"sg-bridge-1-build\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:04 crc kubenswrapper[4928]: I1122 00:27:04.808970 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:05 crc kubenswrapper[4928]: I1122 00:27:05.001997 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Nov 22 00:27:05 crc kubenswrapper[4928]: I1122 00:27:05.376619 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"fb29ad2c-9ce2-43fb-869f-317df89ea79e","Type":"ContainerStarted","Data":"de6d92461fdc7b3d8933aa4ce907e9b2191a1ce3b142a914665504fce18d8bba"} Nov 22 00:27:06 crc kubenswrapper[4928]: I1122 00:27:06.386721 4928 generic.go:334] "Generic (PLEG): container finished" podID="fb29ad2c-9ce2-43fb-869f-317df89ea79e" containerID="10268f22d0e306138169015af75251c9503001f368bc42c7f8a600cb782361d1" exitCode=0 Nov 22 00:27:06 crc kubenswrapper[4928]: I1122 00:27:06.386853 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"fb29ad2c-9ce2-43fb-869f-317df89ea79e","Type":"ContainerDied","Data":"10268f22d0e306138169015af75251c9503001f368bc42c7f8a600cb782361d1"} Nov 22 00:27:07 crc kubenswrapper[4928]: I1122 00:27:07.397911 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"fb29ad2c-9ce2-43fb-869f-317df89ea79e","Type":"ContainerStarted","Data":"c45119389d33a3f0d267b4fa97e5d38b511435434cff6e219e8dfc4763901744"} Nov 22 00:27:07 crc kubenswrapper[4928]: I1122 00:27:07.428266 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-1-build" podStartSLOduration=3.428246805 podStartE2EDuration="3.428246805s" podCreationTimestamp="2025-11-22 00:27:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:27:07.427518005 +0000 UTC m=+1262.715549206" watchObservedRunningTime="2025-11-22 00:27:07.428246805 +0000 UTC m=+1262.716277996" Nov 22 00:27:13 
crc kubenswrapper[4928]: I1122 00:27:13.433597 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_fb29ad2c-9ce2-43fb-869f-317df89ea79e/docker-build/0.log" Nov 22 00:27:13 crc kubenswrapper[4928]: I1122 00:27:13.434491 4928 generic.go:334] "Generic (PLEG): container finished" podID="fb29ad2c-9ce2-43fb-869f-317df89ea79e" containerID="c45119389d33a3f0d267b4fa97e5d38b511435434cff6e219e8dfc4763901744" exitCode=1 Nov 22 00:27:13 crc kubenswrapper[4928]: I1122 00:27:13.434518 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"fb29ad2c-9ce2-43fb-869f-317df89ea79e","Type":"ContainerDied","Data":"c45119389d33a3f0d267b4fa97e5d38b511435434cff6e219e8dfc4763901744"} Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.669379 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_fb29ad2c-9ce2-43fb-869f-317df89ea79e/docker-build/0.log" Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.669952 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.718495 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/fb29ad2c-9ce2-43fb-869f-317df89ea79e-container-storage-run\") pod \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.718566 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fb29ad2c-9ce2-43fb-869f-317df89ea79e-node-pullsecrets\") pod \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.718616 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/fb29ad2c-9ce2-43fb-869f-317df89ea79e-builder-dockercfg-zp4ph-pull\") pod \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.718636 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/fb29ad2c-9ce2-43fb-869f-317df89ea79e-container-storage-root\") pod \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.718661 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fb29ad2c-9ce2-43fb-869f-317df89ea79e-build-proxy-ca-bundles\") pod \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.718685 4928 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fb29ad2c-9ce2-43fb-869f-317df89ea79e-build-ca-bundles\") pod \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.718720 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/fb29ad2c-9ce2-43fb-869f-317df89ea79e-build-system-configs\") pod \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.718739 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/fb29ad2c-9ce2-43fb-869f-317df89ea79e-builder-dockercfg-zp4ph-push\") pod \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.718754 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/fb29ad2c-9ce2-43fb-869f-317df89ea79e-buildcachedir\") pod \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.718780 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hzht\" (UniqueName: \"kubernetes.io/projected/fb29ad2c-9ce2-43fb-869f-317df89ea79e-kube-api-access-9hzht\") pod \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.718778 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb29ad2c-9ce2-43fb-869f-317df89ea79e-node-pullsecrets" (OuterVolumeSpecName: 
"node-pullsecrets") pod "fb29ad2c-9ce2-43fb-869f-317df89ea79e" (UID: "fb29ad2c-9ce2-43fb-869f-317df89ea79e"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.718841 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/fb29ad2c-9ce2-43fb-869f-317df89ea79e-build-blob-cache\") pod \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.718861 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/fb29ad2c-9ce2-43fb-869f-317df89ea79e-buildworkdir\") pod \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\" (UID: \"fb29ad2c-9ce2-43fb-869f-317df89ea79e\") " Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.719042 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb29ad2c-9ce2-43fb-869f-317df89ea79e-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "fb29ad2c-9ce2-43fb-869f-317df89ea79e" (UID: "fb29ad2c-9ce2-43fb-869f-317df89ea79e"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.719564 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb29ad2c-9ce2-43fb-869f-317df89ea79e-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "fb29ad2c-9ce2-43fb-869f-317df89ea79e" (UID: "fb29ad2c-9ce2-43fb-869f-317df89ea79e"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.719645 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb29ad2c-9ce2-43fb-869f-317df89ea79e-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "fb29ad2c-9ce2-43fb-869f-317df89ea79e" (UID: "fb29ad2c-9ce2-43fb-869f-317df89ea79e"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.719818 4928 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fb29ad2c-9ce2-43fb-869f-317df89ea79e-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.719830 4928 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fb29ad2c-9ce2-43fb-869f-317df89ea79e-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.719840 4928 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/fb29ad2c-9ce2-43fb-869f-317df89ea79e-build-system-configs\") on node \"crc\" DevicePath \"\"" Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.719847 4928 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/fb29ad2c-9ce2-43fb-869f-317df89ea79e-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.719887 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb29ad2c-9ce2-43fb-869f-317df89ea79e-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "fb29ad2c-9ce2-43fb-869f-317df89ea79e" (UID: "fb29ad2c-9ce2-43fb-869f-317df89ea79e"). 
InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.720004 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb29ad2c-9ce2-43fb-869f-317df89ea79e-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "fb29ad2c-9ce2-43fb-869f-317df89ea79e" (UID: "fb29ad2c-9ce2-43fb-869f-317df89ea79e"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.719979 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb29ad2c-9ce2-43fb-869f-317df89ea79e-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "fb29ad2c-9ce2-43fb-869f-317df89ea79e" (UID: "fb29ad2c-9ce2-43fb-869f-317df89ea79e"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.724435 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb29ad2c-9ce2-43fb-869f-317df89ea79e-builder-dockercfg-zp4ph-push" (OuterVolumeSpecName: "builder-dockercfg-zp4ph-push") pod "fb29ad2c-9ce2-43fb-869f-317df89ea79e" (UID: "fb29ad2c-9ce2-43fb-869f-317df89ea79e"). InnerVolumeSpecName "builder-dockercfg-zp4ph-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.725924 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb29ad2c-9ce2-43fb-869f-317df89ea79e-builder-dockercfg-zp4ph-pull" (OuterVolumeSpecName: "builder-dockercfg-zp4ph-pull") pod "fb29ad2c-9ce2-43fb-869f-317df89ea79e" (UID: "fb29ad2c-9ce2-43fb-869f-317df89ea79e"). InnerVolumeSpecName "builder-dockercfg-zp4ph-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.727334 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb29ad2c-9ce2-43fb-869f-317df89ea79e-kube-api-access-9hzht" (OuterVolumeSpecName: "kube-api-access-9hzht") pod "fb29ad2c-9ce2-43fb-869f-317df89ea79e" (UID: "fb29ad2c-9ce2-43fb-869f-317df89ea79e"). InnerVolumeSpecName "kube-api-access-9hzht". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.796951 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb29ad2c-9ce2-43fb-869f-317df89ea79e-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "fb29ad2c-9ce2-43fb-869f-317df89ea79e" (UID: "fb29ad2c-9ce2-43fb-869f-317df89ea79e"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.821197 4928 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/fb29ad2c-9ce2-43fb-869f-317df89ea79e-builder-dockercfg-zp4ph-pull\") on node \"crc\" DevicePath \"\"" Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.821228 4928 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fb29ad2c-9ce2-43fb-869f-317df89ea79e-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.821240 4928 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/fb29ad2c-9ce2-43fb-869f-317df89ea79e-builder-dockercfg-zp4ph-push\") on node \"crc\" DevicePath \"\"" Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.821250 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hzht\" (UniqueName: 
\"kubernetes.io/projected/fb29ad2c-9ce2-43fb-869f-317df89ea79e-kube-api-access-9hzht\") on node \"crc\" DevicePath \"\"" Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.821278 4928 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/fb29ad2c-9ce2-43fb-869f-317df89ea79e-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.821290 4928 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/fb29ad2c-9ce2-43fb-869f-317df89ea79e-buildworkdir\") on node \"crc\" DevicePath \"\"" Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.821300 4928 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/fb29ad2c-9ce2-43fb-869f-317df89ea79e-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 22 00:27:14 crc kubenswrapper[4928]: I1122 00:27:14.995964 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Nov 22 00:27:15 crc kubenswrapper[4928]: I1122 00:27:15.004981 4928 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Nov 22 00:27:15 crc kubenswrapper[4928]: I1122 00:27:15.056583 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb29ad2c-9ce2-43fb-869f-317df89ea79e-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "fb29ad2c-9ce2-43fb-869f-317df89ea79e" (UID: "fb29ad2c-9ce2-43fb-869f-317df89ea79e"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:27:15 crc kubenswrapper[4928]: I1122 00:27:15.124980 4928 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/fb29ad2c-9ce2-43fb-869f-317df89ea79e-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 22 00:27:15 crc kubenswrapper[4928]: I1122 00:27:15.450110 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_fb29ad2c-9ce2-43fb-869f-317df89ea79e/docker-build/0.log" Nov 22 00:27:15 crc kubenswrapper[4928]: I1122 00:27:15.450822 4928 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de6d92461fdc7b3d8933aa4ce907e9b2191a1ce3b142a914665504fce18d8bba" Nov 22 00:27:15 crc kubenswrapper[4928]: I1122 00:27:15.450963 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Nov 22 00:27:15 crc kubenswrapper[4928]: I1122 00:27:15.627944 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb29ad2c-9ce2-43fb-869f-317df89ea79e" path="/var/lib/kubelet/pods/fb29ad2c-9ce2-43fb-869f-317df89ea79e/volumes" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.662453 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-2-build"] Nov 22 00:27:16 crc kubenswrapper[4928]: E1122 00:27:16.662685 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb29ad2c-9ce2-43fb-869f-317df89ea79e" containerName="manage-dockerfile" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.662698 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb29ad2c-9ce2-43fb-869f-317df89ea79e" containerName="manage-dockerfile" Nov 22 00:27:16 crc kubenswrapper[4928]: E1122 00:27:16.662706 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb29ad2c-9ce2-43fb-869f-317df89ea79e" containerName="docker-build" Nov 22 00:27:16 crc 
kubenswrapper[4928]: I1122 00:27:16.662713 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb29ad2c-9ce2-43fb-869f-317df89ea79e" containerName="docker-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.662823 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb29ad2c-9ce2-43fb-869f-317df89ea79e" containerName="docker-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.663590 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.668306 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-ca" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.668447 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-sys-config" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.669044 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-global-ca" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.670662 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-zp4ph" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.682853 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.744203 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dcbaea42-d7e0-47d1-8181-df3acdc708ee-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.744284 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/dcbaea42-d7e0-47d1-8181-df3acdc708ee-builder-dockercfg-zp4ph-push\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.744311 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dcbaea42-d7e0-47d1-8181-df3acdc708ee-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.744343 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dcbaea42-d7e0-47d1-8181-df3acdc708ee-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.744390 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dcbaea42-d7e0-47d1-8181-df3acdc708ee-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.744414 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dcbaea42-d7e0-47d1-8181-df3acdc708ee-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.744430 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dcbaea42-d7e0-47d1-8181-df3acdc708ee-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.744467 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6flj7\" (UniqueName: \"kubernetes.io/projected/dcbaea42-d7e0-47d1-8181-df3acdc708ee-kube-api-access-6flj7\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.744483 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dcbaea42-d7e0-47d1-8181-df3acdc708ee-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.744509 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/dcbaea42-d7e0-47d1-8181-df3acdc708ee-builder-dockercfg-zp4ph-pull\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.744536 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dcbaea42-d7e0-47d1-8181-df3acdc708ee-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.744553 4928 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dcbaea42-d7e0-47d1-8181-df3acdc708ee-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.845660 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dcbaea42-d7e0-47d1-8181-df3acdc708ee-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.845729 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/dcbaea42-d7e0-47d1-8181-df3acdc708ee-builder-dockercfg-zp4ph-push\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.845751 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dcbaea42-d7e0-47d1-8181-df3acdc708ee-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.845772 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dcbaea42-d7e0-47d1-8181-df3acdc708ee-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.845796 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dcbaea42-d7e0-47d1-8181-df3acdc708ee-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.846035 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dcbaea42-d7e0-47d1-8181-df3acdc708ee-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.846068 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dcbaea42-d7e0-47d1-8181-df3acdc708ee-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.846092 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dcbaea42-d7e0-47d1-8181-df3acdc708ee-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.846106 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6flj7\" (UniqueName: \"kubernetes.io/projected/dcbaea42-d7e0-47d1-8181-df3acdc708ee-kube-api-access-6flj7\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.846293 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/dcbaea42-d7e0-47d1-8181-df3acdc708ee-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.846442 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/dcbaea42-d7e0-47d1-8181-df3acdc708ee-builder-dockercfg-zp4ph-pull\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.846511 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dcbaea42-d7e0-47d1-8181-df3acdc708ee-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.846519 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dcbaea42-d7e0-47d1-8181-df3acdc708ee-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.846537 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dcbaea42-d7e0-47d1-8181-df3acdc708ee-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.846587 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dcbaea42-d7e0-47d1-8181-df3acdc708ee-build-system-configs\") pod 
\"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.846652 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dcbaea42-d7e0-47d1-8181-df3acdc708ee-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.846858 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dcbaea42-d7e0-47d1-8181-df3acdc708ee-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.846973 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dcbaea42-d7e0-47d1-8181-df3acdc708ee-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.847090 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dcbaea42-d7e0-47d1-8181-df3acdc708ee-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.847101 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dcbaea42-d7e0-47d1-8181-df3acdc708ee-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " 
pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.847955 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dcbaea42-d7e0-47d1-8181-df3acdc708ee-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.851304 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/dcbaea42-d7e0-47d1-8181-df3acdc708ee-builder-dockercfg-zp4ph-push\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.852153 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/dcbaea42-d7e0-47d1-8181-df3acdc708ee-builder-dockercfg-zp4ph-pull\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.862200 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6flj7\" (UniqueName: \"kubernetes.io/projected/dcbaea42-d7e0-47d1-8181-df3acdc708ee-kube-api-access-6flj7\") pod \"sg-bridge-2-build\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:16 crc kubenswrapper[4928]: I1122 00:27:16.982289 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Nov 22 00:27:17 crc kubenswrapper[4928]: I1122 00:27:17.181611 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Nov 22 00:27:17 crc kubenswrapper[4928]: I1122 00:27:17.463274 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"dcbaea42-d7e0-47d1-8181-df3acdc708ee","Type":"ContainerStarted","Data":"38e564020f50657c20218ef6d23778fd4de6abae2d4ea9f80c4ba62c7b6a7794"} Nov 22 00:27:18 crc kubenswrapper[4928]: I1122 00:27:18.469481 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"dcbaea42-d7e0-47d1-8181-df3acdc708ee","Type":"ContainerStarted","Data":"19ae8adb366c8d9fecbe1b884b1bd4ae4f95012b8118344492a103a3ec46dc51"} Nov 22 00:27:19 crc kubenswrapper[4928]: I1122 00:27:19.477255 4928 generic.go:334] "Generic (PLEG): container finished" podID="dcbaea42-d7e0-47d1-8181-df3acdc708ee" containerID="19ae8adb366c8d9fecbe1b884b1bd4ae4f95012b8118344492a103a3ec46dc51" exitCode=0 Nov 22 00:27:19 crc kubenswrapper[4928]: I1122 00:27:19.477308 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"dcbaea42-d7e0-47d1-8181-df3acdc708ee","Type":"ContainerDied","Data":"19ae8adb366c8d9fecbe1b884b1bd4ae4f95012b8118344492a103a3ec46dc51"} Nov 22 00:27:20 crc kubenswrapper[4928]: I1122 00:27:20.484617 4928 generic.go:334] "Generic (PLEG): container finished" podID="dcbaea42-d7e0-47d1-8181-df3acdc708ee" containerID="e30a3941bd5e462a646c6e3925f913357fe86c4b3ff28a8c0519be7aaffc22e7" exitCode=0 Nov 22 00:27:20 crc kubenswrapper[4928]: I1122 00:27:20.484656 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"dcbaea42-d7e0-47d1-8181-df3acdc708ee","Type":"ContainerDied","Data":"e30a3941bd5e462a646c6e3925f913357fe86c4b3ff28a8c0519be7aaffc22e7"} Nov 22 00:27:20 
crc kubenswrapper[4928]: I1122 00:27:20.520586 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_dcbaea42-d7e0-47d1-8181-df3acdc708ee/manage-dockerfile/0.log" Nov 22 00:27:21 crc kubenswrapper[4928]: I1122 00:27:21.503225 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"dcbaea42-d7e0-47d1-8181-df3acdc708ee","Type":"ContainerStarted","Data":"9af677dedc171e39059fa040d59678c99360dc448c8f24a20b9f5dd0d053d986"} Nov 22 00:27:21 crc kubenswrapper[4928]: I1122 00:27:21.568854 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-2-build" podStartSLOduration=5.568839063 podStartE2EDuration="5.568839063s" podCreationTimestamp="2025-11-22 00:27:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:27:21.565212625 +0000 UTC m=+1276.853243816" watchObservedRunningTime="2025-11-22 00:27:21.568839063 +0000 UTC m=+1276.856870254" Nov 22 00:28:00 crc kubenswrapper[4928]: I1122 00:28:00.030882 4928 generic.go:334] "Generic (PLEG): container finished" podID="dcbaea42-d7e0-47d1-8181-df3acdc708ee" containerID="9af677dedc171e39059fa040d59678c99360dc448c8f24a20b9f5dd0d053d986" exitCode=0 Nov 22 00:28:00 crc kubenswrapper[4928]: I1122 00:28:00.030956 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"dcbaea42-d7e0-47d1-8181-df3acdc708ee","Type":"ContainerDied","Data":"9af677dedc171e39059fa040d59678c99360dc448c8f24a20b9f5dd0d053d986"} Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.342479 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.414930 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dcbaea42-d7e0-47d1-8181-df3acdc708ee-build-system-configs\") pod \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.415636 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcbaea42-d7e0-47d1-8181-df3acdc708ee-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "dcbaea42-d7e0-47d1-8181-df3acdc708ee" (UID: "dcbaea42-d7e0-47d1-8181-df3acdc708ee"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.515552 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/dcbaea42-d7e0-47d1-8181-df3acdc708ee-builder-dockercfg-zp4ph-pull\") pod \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.515594 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dcbaea42-d7e0-47d1-8181-df3acdc708ee-container-storage-run\") pod \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.515612 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dcbaea42-d7e0-47d1-8181-df3acdc708ee-build-blob-cache\") pod \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " 
Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.515631 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dcbaea42-d7e0-47d1-8181-df3acdc708ee-build-proxy-ca-bundles\") pod \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.515649 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dcbaea42-d7e0-47d1-8181-df3acdc708ee-buildcachedir\") pod \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.515677 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dcbaea42-d7e0-47d1-8181-df3acdc708ee-build-ca-bundles\") pod \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.515728 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcbaea42-d7e0-47d1-8181-df3acdc708ee-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "dcbaea42-d7e0-47d1-8181-df3acdc708ee" (UID: "dcbaea42-d7e0-47d1-8181-df3acdc708ee"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.515761 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dcbaea42-d7e0-47d1-8181-df3acdc708ee-node-pullsecrets\") pod \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.515821 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcbaea42-d7e0-47d1-8181-df3acdc708ee-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "dcbaea42-d7e0-47d1-8181-df3acdc708ee" (UID: "dcbaea42-d7e0-47d1-8181-df3acdc708ee"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.515842 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dcbaea42-d7e0-47d1-8181-df3acdc708ee-container-storage-root\") pod \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.515866 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6flj7\" (UniqueName: \"kubernetes.io/projected/dcbaea42-d7e0-47d1-8181-df3acdc708ee-kube-api-access-6flj7\") pod \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.515965 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/dcbaea42-d7e0-47d1-8181-df3acdc708ee-builder-dockercfg-zp4ph-push\") pod \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " Nov 22 00:28:01 crc 
kubenswrapper[4928]: I1122 00:28:01.515981 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dcbaea42-d7e0-47d1-8181-df3acdc708ee-buildworkdir\") pod \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\" (UID: \"dcbaea42-d7e0-47d1-8181-df3acdc708ee\") " Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.516312 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcbaea42-d7e0-47d1-8181-df3acdc708ee-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "dcbaea42-d7e0-47d1-8181-df3acdc708ee" (UID: "dcbaea42-d7e0-47d1-8181-df3acdc708ee"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.516734 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcbaea42-d7e0-47d1-8181-df3acdc708ee-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "dcbaea42-d7e0-47d1-8181-df3acdc708ee" (UID: "dcbaea42-d7e0-47d1-8181-df3acdc708ee"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.517284 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcbaea42-d7e0-47d1-8181-df3acdc708ee-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "dcbaea42-d7e0-47d1-8181-df3acdc708ee" (UID: "dcbaea42-d7e0-47d1-8181-df3acdc708ee"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.517618 4928 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dcbaea42-d7e0-47d1-8181-df3acdc708ee-build-system-configs\") on node \"crc\" DevicePath \"\"" Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.517655 4928 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dcbaea42-d7e0-47d1-8181-df3acdc708ee-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.517667 4928 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dcbaea42-d7e0-47d1-8181-df3acdc708ee-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.517682 4928 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dcbaea42-d7e0-47d1-8181-df3acdc708ee-buildworkdir\") on node \"crc\" DevicePath \"\"" Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.521540 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcbaea42-d7e0-47d1-8181-df3acdc708ee-builder-dockercfg-zp4ph-pull" (OuterVolumeSpecName: "builder-dockercfg-zp4ph-pull") pod "dcbaea42-d7e0-47d1-8181-df3acdc708ee" (UID: "dcbaea42-d7e0-47d1-8181-df3acdc708ee"). InnerVolumeSpecName "builder-dockercfg-zp4ph-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.528282 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcbaea42-d7e0-47d1-8181-df3acdc708ee-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "dcbaea42-d7e0-47d1-8181-df3acdc708ee" (UID: "dcbaea42-d7e0-47d1-8181-df3acdc708ee"). 
InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.533030 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcbaea42-d7e0-47d1-8181-df3acdc708ee-kube-api-access-6flj7" (OuterVolumeSpecName: "kube-api-access-6flj7") pod "dcbaea42-d7e0-47d1-8181-df3acdc708ee" (UID: "dcbaea42-d7e0-47d1-8181-df3acdc708ee"). InnerVolumeSpecName "kube-api-access-6flj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.535439 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcbaea42-d7e0-47d1-8181-df3acdc708ee-builder-dockercfg-zp4ph-push" (OuterVolumeSpecName: "builder-dockercfg-zp4ph-push") pod "dcbaea42-d7e0-47d1-8181-df3acdc708ee" (UID: "dcbaea42-d7e0-47d1-8181-df3acdc708ee"). InnerVolumeSpecName "builder-dockercfg-zp4ph-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.618886 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6flj7\" (UniqueName: \"kubernetes.io/projected/dcbaea42-d7e0-47d1-8181-df3acdc708ee-kube-api-access-6flj7\") on node \"crc\" DevicePath \"\"" Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.619214 4928 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/dcbaea42-d7e0-47d1-8181-df3acdc708ee-builder-dockercfg-zp4ph-push\") on node \"crc\" DevicePath \"\"" Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.619309 4928 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/dcbaea42-d7e0-47d1-8181-df3acdc708ee-builder-dockercfg-zp4ph-pull\") on node \"crc\" DevicePath \"\"" Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.619385 4928 
reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dcbaea42-d7e0-47d1-8181-df3acdc708ee-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.619439 4928 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dcbaea42-d7e0-47d1-8181-df3acdc708ee-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.619504 4928 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dcbaea42-d7e0-47d1-8181-df3acdc708ee-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.636002 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcbaea42-d7e0-47d1-8181-df3acdc708ee-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "dcbaea42-d7e0-47d1-8181-df3acdc708ee" (UID: "dcbaea42-d7e0-47d1-8181-df3acdc708ee"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:28:01 crc kubenswrapper[4928]: I1122 00:28:01.721035 4928 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dcbaea42-d7e0-47d1-8181-df3acdc708ee-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 22 00:28:02 crc kubenswrapper[4928]: I1122 00:28:02.047707 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"dcbaea42-d7e0-47d1-8181-df3acdc708ee","Type":"ContainerDied","Data":"38e564020f50657c20218ef6d23778fd4de6abae2d4ea9f80c4ba62c7b6a7794"} Nov 22 00:28:02 crc kubenswrapper[4928]: I1122 00:28:02.047745 4928 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38e564020f50657c20218ef6d23778fd4de6abae2d4ea9f80c4ba62c7b6a7794" Nov 22 00:28:02 crc kubenswrapper[4928]: I1122 00:28:02.047774 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Nov 22 00:28:02 crc kubenswrapper[4928]: I1122 00:28:02.149613 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcbaea42-d7e0-47d1-8181-df3acdc708ee-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "dcbaea42-d7e0-47d1-8181-df3acdc708ee" (UID: "dcbaea42-d7e0-47d1-8181-df3acdc708ee"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:28:02 crc kubenswrapper[4928]: I1122 00:28:02.226872 4928 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dcbaea42-d7e0-47d1-8181-df3acdc708ee-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.270097 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Nov 22 00:28:08 crc kubenswrapper[4928]: E1122 00:28:08.270703 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbaea42-d7e0-47d1-8181-df3acdc708ee" containerName="manage-dockerfile" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.270731 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbaea42-d7e0-47d1-8181-df3acdc708ee" containerName="manage-dockerfile" Nov 22 00:28:08 crc kubenswrapper[4928]: E1122 00:28:08.270748 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbaea42-d7e0-47d1-8181-df3acdc708ee" containerName="git-clone" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.270753 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbaea42-d7e0-47d1-8181-df3acdc708ee" containerName="git-clone" Nov 22 00:28:08 crc kubenswrapper[4928]: E1122 00:28:08.270762 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbaea42-d7e0-47d1-8181-df3acdc708ee" containerName="docker-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.270767 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbaea42-d7e0-47d1-8181-df3acdc708ee" containerName="docker-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.270880 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcbaea42-d7e0-47d1-8181-df3acdc708ee" containerName="docker-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.271462 4928 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.273495 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-sys-config" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.274107 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-global-ca" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.274582 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-ca" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.276903 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-zp4ph" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.287394 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.319365 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8d9f219b-df79-459b-a00c-555f60a06de7-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.319422 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8d9f219b-df79-459b-a00c-555f60a06de7-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.319443 4928 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/8d9f219b-df79-459b-a00c-555f60a06de7-builder-dockercfg-zp4ph-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.319461 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8d9f219b-df79-459b-a00c-555f60a06de7-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.319484 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d9f219b-df79-459b-a00c-555f60a06de7-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.319503 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d9f219b-df79-459b-a00c-555f60a06de7-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.319636 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8d9f219b-df79-459b-a00c-555f60a06de7-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: 
\"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.319683 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfpnr\" (UniqueName: \"kubernetes.io/projected/8d9f219b-df79-459b-a00c-555f60a06de7-kube-api-access-vfpnr\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.319709 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8d9f219b-df79-459b-a00c-555f60a06de7-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.319751 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/8d9f219b-df79-459b-a00c-555f60a06de7-builder-dockercfg-zp4ph-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.319812 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8d9f219b-df79-459b-a00c-555f60a06de7-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.319835 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8d9f219b-df79-459b-a00c-555f60a06de7-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.420625 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8d9f219b-df79-459b-a00c-555f60a06de7-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.420737 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8d9f219b-df79-459b-a00c-555f60a06de7-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.420752 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8d9f219b-df79-459b-a00c-555f60a06de7-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.420821 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/8d9f219b-df79-459b-a00c-555f60a06de7-builder-dockercfg-zp4ph-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.420871 4928 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8d9f219b-df79-459b-a00c-555f60a06de7-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.420934 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d9f219b-df79-459b-a00c-555f60a06de7-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.420992 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d9f219b-df79-459b-a00c-555f60a06de7-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.421055 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfpnr\" (UniqueName: \"kubernetes.io/projected/8d9f219b-df79-459b-a00c-555f60a06de7-kube-api-access-vfpnr\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.421085 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8d9f219b-df79-459b-a00c-555f60a06de7-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc 
kubenswrapper[4928]: I1122 00:28:08.421100 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8d9f219b-df79-459b-a00c-555f60a06de7-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.421145 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8d9f219b-df79-459b-a00c-555f60a06de7-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.421207 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/8d9f219b-df79-459b-a00c-555f60a06de7-builder-dockercfg-zp4ph-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.421268 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8d9f219b-df79-459b-a00c-555f60a06de7-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.421310 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8d9f219b-df79-459b-a00c-555f60a06de7-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " 
pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.421382 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8d9f219b-df79-459b-a00c-555f60a06de7-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.421403 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8d9f219b-df79-459b-a00c-555f60a06de7-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.421604 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8d9f219b-df79-459b-a00c-555f60a06de7-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.421658 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8d9f219b-df79-459b-a00c-555f60a06de7-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.422456 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8d9f219b-df79-459b-a00c-555f60a06de7-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: 
\"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.422526 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d9f219b-df79-459b-a00c-555f60a06de7-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.423057 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d9f219b-df79-459b-a00c-555f60a06de7-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.429418 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/8d9f219b-df79-459b-a00c-555f60a06de7-builder-dockercfg-zp4ph-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.434308 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/8d9f219b-df79-459b-a00c-555f60a06de7-builder-dockercfg-zp4ph-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.440468 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfpnr\" (UniqueName: 
\"kubernetes.io/projected/8d9f219b-df79-459b-a00c-555f60a06de7-kube-api-access-vfpnr\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:08 crc kubenswrapper[4928]: I1122 00:28:08.584685 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:09 crc kubenswrapper[4928]: I1122 00:28:09.025592 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Nov 22 00:28:09 crc kubenswrapper[4928]: W1122 00:28:09.028035 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d9f219b_df79_459b_a00c_555f60a06de7.slice/crio-3a6dc6dce5545f5b705fd190e2f5e69971209ec3a5af64de1d6ec2d8a410505a WatchSource:0}: Error finding container 3a6dc6dce5545f5b705fd190e2f5e69971209ec3a5af64de1d6ec2d8a410505a: Status 404 returned error can't find the container with id 3a6dc6dce5545f5b705fd190e2f5e69971209ec3a5af64de1d6ec2d8a410505a Nov 22 00:28:09 crc kubenswrapper[4928]: I1122 00:28:09.097823 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"8d9f219b-df79-459b-a00c-555f60a06de7","Type":"ContainerStarted","Data":"3a6dc6dce5545f5b705fd190e2f5e69971209ec3a5af64de1d6ec2d8a410505a"} Nov 22 00:28:10 crc kubenswrapper[4928]: I1122 00:28:10.105206 4928 generic.go:334] "Generic (PLEG): container finished" podID="8d9f219b-df79-459b-a00c-555f60a06de7" containerID="b19557c03913ff35eeb7c2fcf971a3e731eff281ee1bcf42348998347b0afabc" exitCode=0 Nov 22 00:28:10 crc kubenswrapper[4928]: I1122 00:28:10.105321 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" 
event={"ID":"8d9f219b-df79-459b-a00c-555f60a06de7","Type":"ContainerDied","Data":"b19557c03913ff35eeb7c2fcf971a3e731eff281ee1bcf42348998347b0afabc"} Nov 22 00:28:11 crc kubenswrapper[4928]: I1122 00:28:11.119330 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"8d9f219b-df79-459b-a00c-555f60a06de7","Type":"ContainerStarted","Data":"43d7d74224ef7e14da976968358740fca0f4fe0a48aa644b901e31825f8cb4a1"} Nov 22 00:28:11 crc kubenswrapper[4928]: I1122 00:28:11.156368 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-1-build" podStartSLOduration=3.15633139 podStartE2EDuration="3.15633139s" podCreationTimestamp="2025-11-22 00:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:28:11.154474079 +0000 UTC m=+1326.442505280" watchObservedRunningTime="2025-11-22 00:28:11.15633139 +0000 UTC m=+1326.444362611" Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.382581 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.384352 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/prometheus-webhook-snmp-1-build" podUID="8d9f219b-df79-459b-a00c-555f60a06de7" containerName="docker-build" containerID="cri-o://43d7d74224ef7e14da976968358740fca0f4fe0a48aa644b901e31825f8cb4a1" gracePeriod=30 Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.733641 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_8d9f219b-df79-459b-a00c-555f60a06de7/docker-build/0.log" Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.734711 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.856146 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8d9f219b-df79-459b-a00c-555f60a06de7-build-blob-cache\") pod \"8d9f219b-df79-459b-a00c-555f60a06de7\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.856212 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d9f219b-df79-459b-a00c-555f60a06de7-build-proxy-ca-bundles\") pod \"8d9f219b-df79-459b-a00c-555f60a06de7\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.856232 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/8d9f219b-df79-459b-a00c-555f60a06de7-builder-dockercfg-zp4ph-push\") pod \"8d9f219b-df79-459b-a00c-555f60a06de7\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.856247 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d9f219b-df79-459b-a00c-555f60a06de7-build-ca-bundles\") pod \"8d9f219b-df79-459b-a00c-555f60a06de7\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.856271 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfpnr\" (UniqueName: \"kubernetes.io/projected/8d9f219b-df79-459b-a00c-555f60a06de7-kube-api-access-vfpnr\") pod \"8d9f219b-df79-459b-a00c-555f60a06de7\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.856289 4928 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8d9f219b-df79-459b-a00c-555f60a06de7-node-pullsecrets\") pod \"8d9f219b-df79-459b-a00c-555f60a06de7\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.856336 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/8d9f219b-df79-459b-a00c-555f60a06de7-builder-dockercfg-zp4ph-pull\") pod \"8d9f219b-df79-459b-a00c-555f60a06de7\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.856356 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8d9f219b-df79-459b-a00c-555f60a06de7-buildcachedir\") pod \"8d9f219b-df79-459b-a00c-555f60a06de7\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.856389 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8d9f219b-df79-459b-a00c-555f60a06de7-container-storage-root\") pod \"8d9f219b-df79-459b-a00c-555f60a06de7\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.856415 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8d9f219b-df79-459b-a00c-555f60a06de7-build-system-configs\") pod \"8d9f219b-df79-459b-a00c-555f60a06de7\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.856436 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/8d9f219b-df79-459b-a00c-555f60a06de7-buildworkdir\") pod \"8d9f219b-df79-459b-a00c-555f60a06de7\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.856475 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8d9f219b-df79-459b-a00c-555f60a06de7-container-storage-run\") pod \"8d9f219b-df79-459b-a00c-555f60a06de7\" (UID: \"8d9f219b-df79-459b-a00c-555f60a06de7\") " Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.856560 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d9f219b-df79-459b-a00c-555f60a06de7-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "8d9f219b-df79-459b-a00c-555f60a06de7" (UID: "8d9f219b-df79-459b-a00c-555f60a06de7"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.856611 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d9f219b-df79-459b-a00c-555f60a06de7-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "8d9f219b-df79-459b-a00c-555f60a06de7" (UID: "8d9f219b-df79-459b-a00c-555f60a06de7"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.856702 4928 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8d9f219b-df79-459b-a00c-555f60a06de7-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.856719 4928 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8d9f219b-df79-459b-a00c-555f60a06de7-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.857115 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d9f219b-df79-459b-a00c-555f60a06de7-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "8d9f219b-df79-459b-a00c-555f60a06de7" (UID: "8d9f219b-df79-459b-a00c-555f60a06de7"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.857199 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d9f219b-df79-459b-a00c-555f60a06de7-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "8d9f219b-df79-459b-a00c-555f60a06de7" (UID: "8d9f219b-df79-459b-a00c-555f60a06de7"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.857578 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d9f219b-df79-459b-a00c-555f60a06de7-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "8d9f219b-df79-459b-a00c-555f60a06de7" (UID: "8d9f219b-df79-459b-a00c-555f60a06de7"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.857946 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d9f219b-df79-459b-a00c-555f60a06de7-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "8d9f219b-df79-459b-a00c-555f60a06de7" (UID: "8d9f219b-df79-459b-a00c-555f60a06de7"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.858685 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d9f219b-df79-459b-a00c-555f60a06de7-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "8d9f219b-df79-459b-a00c-555f60a06de7" (UID: "8d9f219b-df79-459b-a00c-555f60a06de7"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.858857 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d9f219b-df79-459b-a00c-555f60a06de7-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "8d9f219b-df79-459b-a00c-555f60a06de7" (UID: "8d9f219b-df79-459b-a00c-555f60a06de7"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.860064 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d9f219b-df79-459b-a00c-555f60a06de7-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "8d9f219b-df79-459b-a00c-555f60a06de7" (UID: "8d9f219b-df79-459b-a00c-555f60a06de7"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.867012 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d9f219b-df79-459b-a00c-555f60a06de7-builder-dockercfg-zp4ph-pull" (OuterVolumeSpecName: "builder-dockercfg-zp4ph-pull") pod "8d9f219b-df79-459b-a00c-555f60a06de7" (UID: "8d9f219b-df79-459b-a00c-555f60a06de7"). InnerVolumeSpecName "builder-dockercfg-zp4ph-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.867036 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d9f219b-df79-459b-a00c-555f60a06de7-kube-api-access-vfpnr" (OuterVolumeSpecName: "kube-api-access-vfpnr") pod "8d9f219b-df79-459b-a00c-555f60a06de7" (UID: "8d9f219b-df79-459b-a00c-555f60a06de7"). InnerVolumeSpecName "kube-api-access-vfpnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.870057 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d9f219b-df79-459b-a00c-555f60a06de7-builder-dockercfg-zp4ph-push" (OuterVolumeSpecName: "builder-dockercfg-zp4ph-push") pod "8d9f219b-df79-459b-a00c-555f60a06de7" (UID: "8d9f219b-df79-459b-a00c-555f60a06de7"). InnerVolumeSpecName "builder-dockercfg-zp4ph-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.958876 4928 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8d9f219b-df79-459b-a00c-555f60a06de7-build-system-configs\") on node \"crc\" DevicePath \"\"" Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.959097 4928 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8d9f219b-df79-459b-a00c-555f60a06de7-buildworkdir\") on node \"crc\" DevicePath \"\"" Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.959180 4928 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8d9f219b-df79-459b-a00c-555f60a06de7-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.959301 4928 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8d9f219b-df79-459b-a00c-555f60a06de7-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.959334 4928 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d9f219b-df79-459b-a00c-555f60a06de7-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.959416 4928 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/8d9f219b-df79-459b-a00c-555f60a06de7-builder-dockercfg-zp4ph-push\") on node \"crc\" DevicePath \"\"" Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.959575 4928 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d9f219b-df79-459b-a00c-555f60a06de7-build-ca-bundles\") on node \"crc\" DevicePath \"\"" 
Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.959610 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfpnr\" (UniqueName: \"kubernetes.io/projected/8d9f219b-df79-459b-a00c-555f60a06de7-kube-api-access-vfpnr\") on node \"crc\" DevicePath \"\"" Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.959638 4928 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/8d9f219b-df79-459b-a00c-555f60a06de7-builder-dockercfg-zp4ph-pull\") on node \"crc\" DevicePath \"\"" Nov 22 00:28:17 crc kubenswrapper[4928]: I1122 00:28:17.959665 4928 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8d9f219b-df79-459b-a00c-555f60a06de7-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 22 00:28:18 crc kubenswrapper[4928]: I1122 00:28:18.169437 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_8d9f219b-df79-459b-a00c-555f60a06de7/docker-build/0.log" Nov 22 00:28:18 crc kubenswrapper[4928]: I1122 00:28:18.169885 4928 generic.go:334] "Generic (PLEG): container finished" podID="8d9f219b-df79-459b-a00c-555f60a06de7" containerID="43d7d74224ef7e14da976968358740fca0f4fe0a48aa644b901e31825f8cb4a1" exitCode=1 Nov 22 00:28:18 crc kubenswrapper[4928]: I1122 00:28:18.169922 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"8d9f219b-df79-459b-a00c-555f60a06de7","Type":"ContainerDied","Data":"43d7d74224ef7e14da976968358740fca0f4fe0a48aa644b901e31825f8cb4a1"} Nov 22 00:28:18 crc kubenswrapper[4928]: I1122 00:28:18.169952 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"8d9f219b-df79-459b-a00c-555f60a06de7","Type":"ContainerDied","Data":"3a6dc6dce5545f5b705fd190e2f5e69971209ec3a5af64de1d6ec2d8a410505a"} Nov 
22 00:28:18 crc kubenswrapper[4928]: I1122 00:28:18.169977 4928 scope.go:117] "RemoveContainer" containerID="43d7d74224ef7e14da976968358740fca0f4fe0a48aa644b901e31825f8cb4a1" Nov 22 00:28:18 crc kubenswrapper[4928]: I1122 00:28:18.170000 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 22 00:28:18 crc kubenswrapper[4928]: I1122 00:28:18.197042 4928 scope.go:117] "RemoveContainer" containerID="b19557c03913ff35eeb7c2fcf971a3e731eff281ee1bcf42348998347b0afabc" Nov 22 00:28:18 crc kubenswrapper[4928]: I1122 00:28:18.206664 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Nov 22 00:28:18 crc kubenswrapper[4928]: I1122 00:28:18.208166 4928 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Nov 22 00:28:18 crc kubenswrapper[4928]: I1122 00:28:18.226350 4928 scope.go:117] "RemoveContainer" containerID="43d7d74224ef7e14da976968358740fca0f4fe0a48aa644b901e31825f8cb4a1" Nov 22 00:28:18 crc kubenswrapper[4928]: E1122 00:28:18.226846 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43d7d74224ef7e14da976968358740fca0f4fe0a48aa644b901e31825f8cb4a1\": container with ID starting with 43d7d74224ef7e14da976968358740fca0f4fe0a48aa644b901e31825f8cb4a1 not found: ID does not exist" containerID="43d7d74224ef7e14da976968358740fca0f4fe0a48aa644b901e31825f8cb4a1" Nov 22 00:28:18 crc kubenswrapper[4928]: I1122 00:28:18.226912 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43d7d74224ef7e14da976968358740fca0f4fe0a48aa644b901e31825f8cb4a1"} err="failed to get container status \"43d7d74224ef7e14da976968358740fca0f4fe0a48aa644b901e31825f8cb4a1\": rpc error: code = NotFound desc = could not find container 
\"43d7d74224ef7e14da976968358740fca0f4fe0a48aa644b901e31825f8cb4a1\": container with ID starting with 43d7d74224ef7e14da976968358740fca0f4fe0a48aa644b901e31825f8cb4a1 not found: ID does not exist" Nov 22 00:28:18 crc kubenswrapper[4928]: I1122 00:28:18.226991 4928 scope.go:117] "RemoveContainer" containerID="b19557c03913ff35eeb7c2fcf971a3e731eff281ee1bcf42348998347b0afabc" Nov 22 00:28:18 crc kubenswrapper[4928]: E1122 00:28:18.227401 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b19557c03913ff35eeb7c2fcf971a3e731eff281ee1bcf42348998347b0afabc\": container with ID starting with b19557c03913ff35eeb7c2fcf971a3e731eff281ee1bcf42348998347b0afabc not found: ID does not exist" containerID="b19557c03913ff35eeb7c2fcf971a3e731eff281ee1bcf42348998347b0afabc" Nov 22 00:28:18 crc kubenswrapper[4928]: I1122 00:28:18.227442 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b19557c03913ff35eeb7c2fcf971a3e731eff281ee1bcf42348998347b0afabc"} err="failed to get container status \"b19557c03913ff35eeb7c2fcf971a3e731eff281ee1bcf42348998347b0afabc\": rpc error: code = NotFound desc = could not find container \"b19557c03913ff35eeb7c2fcf971a3e731eff281ee1bcf42348998347b0afabc\": container with ID starting with b19557c03913ff35eeb7c2fcf971a3e731eff281ee1bcf42348998347b0afabc not found: ID does not exist" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.006649 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Nov 22 00:28:19 crc kubenswrapper[4928]: E1122 00:28:19.007203 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9f219b-df79-459b-a00c-555f60a06de7" containerName="docker-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.007230 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9f219b-df79-459b-a00c-555f60a06de7" containerName="docker-build" 
Nov 22 00:28:19 crc kubenswrapper[4928]: E1122 00:28:19.007273 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9f219b-df79-459b-a00c-555f60a06de7" containerName="manage-dockerfile" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.007289 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9f219b-df79-459b-a00c-555f60a06de7" containerName="manage-dockerfile" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.007529 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d9f219b-df79-459b-a00c-555f60a06de7" containerName="docker-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.009464 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.012285 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-ca" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.012469 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-sys-config" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.014110 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-zp4ph" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.015948 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-global-ca" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.030054 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.072350 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.072402 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxw2w\" (UniqueName: \"kubernetes.io/projected/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-kube-api-access-cxw2w\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.072424 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.072439 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.072464 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-builder-dockercfg-zp4ph-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc 
kubenswrapper[4928]: I1122 00:28:19.072525 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.072587 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.072628 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.072653 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.072808 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-build-blob-cache\") pod 
\"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.072894 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.073071 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-builder-dockercfg-zp4ph-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.174263 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.174312 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxw2w\" (UniqueName: \"kubernetes.io/projected/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-kube-api-access-cxw2w\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.174341 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.174363 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.174395 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-builder-dockercfg-zp4ph-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.174427 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.174449 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.174476 4928 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.174500 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.174543 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.174625 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.174675 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-builder-dockercfg-zp4ph-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " 
pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.174874 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.175060 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.175126 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.175188 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.175231 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: 
\"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.175358 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.175408 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.175765 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.175978 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.185418 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-builder-dockercfg-zp4ph-push\") pod 
\"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.186223 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-builder-dockercfg-zp4ph-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.203969 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxw2w\" (UniqueName: \"kubernetes.io/projected/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-kube-api-access-cxw2w\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.380192 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.631279 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d9f219b-df79-459b-a00c-555f60a06de7" path="/var/lib/kubelet/pods/8d9f219b-df79-459b-a00c-555f60a06de7/volumes" Nov 22 00:28:19 crc kubenswrapper[4928]: I1122 00:28:19.648891 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Nov 22 00:28:20 crc kubenswrapper[4928]: I1122 00:28:20.195348 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed","Type":"ContainerStarted","Data":"c6f0e4ac20c13f559c4cda5225c052038a24bb2bc591e2ba14ea7a542a611133"} Nov 22 00:28:20 crc kubenswrapper[4928]: I1122 00:28:20.195628 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed","Type":"ContainerStarted","Data":"37b883235778e953924912b0aef2598f7887a7103afbf5dd6ac2ab271457e140"} Nov 22 00:28:21 crc kubenswrapper[4928]: I1122 00:28:21.203630 4928 generic.go:334] "Generic (PLEG): container finished" podID="a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed" containerID="c6f0e4ac20c13f559c4cda5225c052038a24bb2bc591e2ba14ea7a542a611133" exitCode=0 Nov 22 00:28:21 crc kubenswrapper[4928]: I1122 00:28:21.203952 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed","Type":"ContainerDied","Data":"c6f0e4ac20c13f559c4cda5225c052038a24bb2bc591e2ba14ea7a542a611133"} Nov 22 00:28:22 crc kubenswrapper[4928]: I1122 00:28:22.211223 4928 generic.go:334] "Generic (PLEG): container finished" podID="a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed" containerID="1acd89088bc8f1f7e85d4cf94d712f6f2a9cde4192e38cf74023fa0df4d41cf0" exitCode=0 Nov 22 
00:28:22 crc kubenswrapper[4928]: I1122 00:28:22.211281 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed","Type":"ContainerDied","Data":"1acd89088bc8f1f7e85d4cf94d712f6f2a9cde4192e38cf74023fa0df4d41cf0"} Nov 22 00:28:22 crc kubenswrapper[4928]: I1122 00:28:22.246297 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed/manage-dockerfile/0.log" Nov 22 00:28:23 crc kubenswrapper[4928]: I1122 00:28:23.252550 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed","Type":"ContainerStarted","Data":"91dd289108342a5f9b4fdc7bd2326523c46be24a6fe6a620cf2876edf900cf61"} Nov 22 00:28:23 crc kubenswrapper[4928]: I1122 00:28:23.275484 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-2-build" podStartSLOduration=5.275465612 podStartE2EDuration="5.275465612s" podCreationTimestamp="2025-11-22 00:28:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:28:23.27427699 +0000 UTC m=+1338.562308181" watchObservedRunningTime="2025-11-22 00:28:23.275465612 +0000 UTC m=+1338.563496803" Nov 22 00:29:06 crc kubenswrapper[4928]: I1122 00:29:06.707442 4928 patch_prober.go:28] interesting pod/machine-config-daemon-hp7cp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 00:29:06 crc kubenswrapper[4928]: I1122 00:29:06.708059 4928 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 00:29:16 crc kubenswrapper[4928]: I1122 00:29:16.613064 4928 generic.go:334] "Generic (PLEG): container finished" podID="a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed" containerID="91dd289108342a5f9b4fdc7bd2326523c46be24a6fe6a620cf2876edf900cf61" exitCode=0 Nov 22 00:29:16 crc kubenswrapper[4928]: I1122 00:29:16.613178 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed","Type":"ContainerDied","Data":"91dd289108342a5f9b4fdc7bd2326523c46be24a6fe6a620cf2876edf900cf61"} Nov 22 00:29:17 crc kubenswrapper[4928]: I1122 00:29:17.926619 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.022510 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-build-blob-cache\") pod \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.022665 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-container-storage-root\") pod \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.022716 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxw2w\" (UniqueName: 
\"kubernetes.io/projected/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-kube-api-access-cxw2w\") pod \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.022752 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-builder-dockercfg-zp4ph-push\") pod \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.022837 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-build-ca-bundles\") pod \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.022905 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-node-pullsecrets\") pod \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.022938 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-buildworkdir\") pod \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.022967 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-build-system-configs\") pod \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " Nov 
22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.023004 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-build-proxy-ca-bundles\") pod \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.023055 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-container-storage-run\") pod \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.023084 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-buildcachedir\") pod \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.023137 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-builder-dockercfg-zp4ph-pull\") pod \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\" (UID: \"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed\") " Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.024772 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed" (UID: "a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.025225 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed" (UID: "a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.025332 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed" (UID: "a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.025605 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed" (UID: "a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.025756 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed" (UID: "a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.026154 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed" (UID: "a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.029079 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed" (UID: "a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.034638 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-builder-dockercfg-zp4ph-push" (OuterVolumeSpecName: "builder-dockercfg-zp4ph-push") pod "a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed" (UID: "a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed"). InnerVolumeSpecName "builder-dockercfg-zp4ph-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.035728 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-builder-dockercfg-zp4ph-pull" (OuterVolumeSpecName: "builder-dockercfg-zp4ph-pull") pod "a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed" (UID: "a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed"). InnerVolumeSpecName "builder-dockercfg-zp4ph-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.041965 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-kube-api-access-cxw2w" (OuterVolumeSpecName: "kube-api-access-cxw2w") pod "a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed" (UID: "a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed"). InnerVolumeSpecName "kube-api-access-cxw2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.119479 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed" (UID: "a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.127980 4928 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.128041 4928 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-buildworkdir\") on node \"crc\" DevicePath \"\"" Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.128053 4928 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-build-system-configs\") on node \"crc\" DevicePath \"\"" Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.128066 4928 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.128078 4928 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.128088 4928 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.128099 4928 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-zp4ph-pull\" (UniqueName: \"kubernetes.io/secret/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-builder-dockercfg-zp4ph-pull\") on node \"crc\" DevicePath \"\"" Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.128112 4928 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.128123 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxw2w\" (UniqueName: \"kubernetes.io/projected/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-kube-api-access-cxw2w\") on node \"crc\" DevicePath \"\"" Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.128134 4928 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-zp4ph-push\" (UniqueName: \"kubernetes.io/secret/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-builder-dockercfg-zp4ph-push\") on node \"crc\" DevicePath \"\"" Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.128145 4928 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.629525 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed","Type":"ContainerDied","Data":"37b883235778e953924912b0aef2598f7887a7103afbf5dd6ac2ab271457e140"} Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.629912 4928 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37b883235778e953924912b0aef2598f7887a7103afbf5dd6ac2ab271457e140" Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.629634 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.808777 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed" (UID: "a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:29:18 crc kubenswrapper[4928]: I1122 00:29:18.837957 4928 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 22 00:29:23 crc kubenswrapper[4928]: I1122 00:29:23.692329 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-7c8ffb5b8-v9769"] Nov 22 00:29:23 crc kubenswrapper[4928]: E1122 00:29:23.693002 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed" containerName="manage-dockerfile" Nov 22 00:29:23 crc kubenswrapper[4928]: I1122 00:29:23.693017 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed" containerName="manage-dockerfile" Nov 22 00:29:23 crc kubenswrapper[4928]: E1122 00:29:23.693035 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed" containerName="docker-build" Nov 22 00:29:23 crc kubenswrapper[4928]: I1122 00:29:23.693041 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed" containerName="docker-build" Nov 22 00:29:23 crc kubenswrapper[4928]: E1122 00:29:23.693049 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed" containerName="git-clone" Nov 22 00:29:23 crc kubenswrapper[4928]: I1122 00:29:23.693055 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed" containerName="git-clone" Nov 22 00:29:23 crc kubenswrapper[4928]: I1122 00:29:23.693150 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4d23dd8-8f3f-453b-a79c-2a7515ebc2ed" containerName="docker-build" Nov 22 00:29:23 crc kubenswrapper[4928]: I1122 00:29:23.693530 4928 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-7c8ffb5b8-v9769" Nov 22 00:29:23 crc kubenswrapper[4928]: I1122 00:29:23.696106 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-operator-dockercfg-dskhm" Nov 22 00:29:23 crc kubenswrapper[4928]: I1122 00:29:23.714552 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-7c8ffb5b8-v9769"] Nov 22 00:29:23 crc kubenswrapper[4928]: I1122 00:29:23.799941 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnj9t\" (UniqueName: \"kubernetes.io/projected/c1a256b5-b848-45ed-99e1-d01613414391-kube-api-access-nnj9t\") pod \"smart-gateway-operator-7c8ffb5b8-v9769\" (UID: \"c1a256b5-b848-45ed-99e1-d01613414391\") " pod="service-telemetry/smart-gateway-operator-7c8ffb5b8-v9769" Nov 22 00:29:23 crc kubenswrapper[4928]: I1122 00:29:23.800343 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/c1a256b5-b848-45ed-99e1-d01613414391-runner\") pod \"smart-gateway-operator-7c8ffb5b8-v9769\" (UID: \"c1a256b5-b848-45ed-99e1-d01613414391\") " pod="service-telemetry/smart-gateway-operator-7c8ffb5b8-v9769" Nov 22 00:29:23 crc kubenswrapper[4928]: I1122 00:29:23.901601 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/c1a256b5-b848-45ed-99e1-d01613414391-runner\") pod \"smart-gateway-operator-7c8ffb5b8-v9769\" (UID: \"c1a256b5-b848-45ed-99e1-d01613414391\") " pod="service-telemetry/smart-gateway-operator-7c8ffb5b8-v9769" Nov 22 00:29:23 crc kubenswrapper[4928]: I1122 00:29:23.901677 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnj9t\" (UniqueName: 
\"kubernetes.io/projected/c1a256b5-b848-45ed-99e1-d01613414391-kube-api-access-nnj9t\") pod \"smart-gateway-operator-7c8ffb5b8-v9769\" (UID: \"c1a256b5-b848-45ed-99e1-d01613414391\") " pod="service-telemetry/smart-gateway-operator-7c8ffb5b8-v9769" Nov 22 00:29:23 crc kubenswrapper[4928]: I1122 00:29:23.902078 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/c1a256b5-b848-45ed-99e1-d01613414391-runner\") pod \"smart-gateway-operator-7c8ffb5b8-v9769\" (UID: \"c1a256b5-b848-45ed-99e1-d01613414391\") " pod="service-telemetry/smart-gateway-operator-7c8ffb5b8-v9769" Nov 22 00:29:23 crc kubenswrapper[4928]: I1122 00:29:23.920577 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnj9t\" (UniqueName: \"kubernetes.io/projected/c1a256b5-b848-45ed-99e1-d01613414391-kube-api-access-nnj9t\") pod \"smart-gateway-operator-7c8ffb5b8-v9769\" (UID: \"c1a256b5-b848-45ed-99e1-d01613414391\") " pod="service-telemetry/smart-gateway-operator-7c8ffb5b8-v9769" Nov 22 00:29:24 crc kubenswrapper[4928]: I1122 00:29:24.010864 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-7c8ffb5b8-v9769" Nov 22 00:29:24 crc kubenswrapper[4928]: I1122 00:29:24.493343 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-7c8ffb5b8-v9769"] Nov 22 00:29:24 crc kubenswrapper[4928]: I1122 00:29:24.507960 4928 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 00:29:24 crc kubenswrapper[4928]: I1122 00:29:24.675854 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-7c8ffb5b8-v9769" event={"ID":"c1a256b5-b848-45ed-99e1-d01613414391","Type":"ContainerStarted","Data":"56e76f2d1e6a344f523e7059316a1b3b365fe2d803fb6bf7c3f7a7aefda0634e"} Nov 22 00:29:29 crc kubenswrapper[4928]: I1122 00:29:29.377289 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-547448897-cv9jb"] Nov 22 00:29:29 crc kubenswrapper[4928]: I1122 00:29:29.378424 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-547448897-cv9jb" Nov 22 00:29:29 crc kubenswrapper[4928]: I1122 00:29:29.380237 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-operator-dockercfg-n575c" Nov 22 00:29:29 crc kubenswrapper[4928]: I1122 00:29:29.396999 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-547448897-cv9jb"] Nov 22 00:29:29 crc kubenswrapper[4928]: I1122 00:29:29.486597 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/aed83cda-231a-4eb5-8640-1d190d97595c-runner\") pod \"service-telemetry-operator-547448897-cv9jb\" (UID: \"aed83cda-231a-4eb5-8640-1d190d97595c\") " pod="service-telemetry/service-telemetry-operator-547448897-cv9jb" Nov 22 00:29:29 crc kubenswrapper[4928]: I1122 00:29:29.486677 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dhhp\" (UniqueName: \"kubernetes.io/projected/aed83cda-231a-4eb5-8640-1d190d97595c-kube-api-access-6dhhp\") pod \"service-telemetry-operator-547448897-cv9jb\" (UID: \"aed83cda-231a-4eb5-8640-1d190d97595c\") " pod="service-telemetry/service-telemetry-operator-547448897-cv9jb" Nov 22 00:29:29 crc kubenswrapper[4928]: I1122 00:29:29.587818 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/aed83cda-231a-4eb5-8640-1d190d97595c-runner\") pod \"service-telemetry-operator-547448897-cv9jb\" (UID: \"aed83cda-231a-4eb5-8640-1d190d97595c\") " pod="service-telemetry/service-telemetry-operator-547448897-cv9jb" Nov 22 00:29:29 crc kubenswrapper[4928]: I1122 00:29:29.587904 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dhhp\" (UniqueName: 
\"kubernetes.io/projected/aed83cda-231a-4eb5-8640-1d190d97595c-kube-api-access-6dhhp\") pod \"service-telemetry-operator-547448897-cv9jb\" (UID: \"aed83cda-231a-4eb5-8640-1d190d97595c\") " pod="service-telemetry/service-telemetry-operator-547448897-cv9jb" Nov 22 00:29:29 crc kubenswrapper[4928]: I1122 00:29:29.588388 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/aed83cda-231a-4eb5-8640-1d190d97595c-runner\") pod \"service-telemetry-operator-547448897-cv9jb\" (UID: \"aed83cda-231a-4eb5-8640-1d190d97595c\") " pod="service-telemetry/service-telemetry-operator-547448897-cv9jb" Nov 22 00:29:29 crc kubenswrapper[4928]: I1122 00:29:29.609328 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dhhp\" (UniqueName: \"kubernetes.io/projected/aed83cda-231a-4eb5-8640-1d190d97595c-kube-api-access-6dhhp\") pod \"service-telemetry-operator-547448897-cv9jb\" (UID: \"aed83cda-231a-4eb5-8640-1d190d97595c\") " pod="service-telemetry/service-telemetry-operator-547448897-cv9jb" Nov 22 00:29:29 crc kubenswrapper[4928]: I1122 00:29:29.759724 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-547448897-cv9jb" Nov 22 00:29:32 crc kubenswrapper[4928]: I1122 00:29:32.943046 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-547448897-cv9jb"] Nov 22 00:29:34 crc kubenswrapper[4928]: I1122 00:29:34.604600 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-psftr"] Nov 22 00:29:34 crc kubenswrapper[4928]: I1122 00:29:34.607854 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-psftr" Nov 22 00:29:34 crc kubenswrapper[4928]: I1122 00:29:34.613713 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-psftr"] Nov 22 00:29:34 crc kubenswrapper[4928]: I1122 00:29:34.671538 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4179d92-eb7c-4da2-9931-e976fdd7fadf-catalog-content\") pod \"redhat-operators-psftr\" (UID: \"f4179d92-eb7c-4da2-9931-e976fdd7fadf\") " pod="openshift-marketplace/redhat-operators-psftr" Nov 22 00:29:34 crc kubenswrapper[4928]: I1122 00:29:34.671592 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb59f\" (UniqueName: \"kubernetes.io/projected/f4179d92-eb7c-4da2-9931-e976fdd7fadf-kube-api-access-fb59f\") pod \"redhat-operators-psftr\" (UID: \"f4179d92-eb7c-4da2-9931-e976fdd7fadf\") " pod="openshift-marketplace/redhat-operators-psftr" Nov 22 00:29:34 crc kubenswrapper[4928]: I1122 00:29:34.671816 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4179d92-eb7c-4da2-9931-e976fdd7fadf-utilities\") pod \"redhat-operators-psftr\" (UID: \"f4179d92-eb7c-4da2-9931-e976fdd7fadf\") " pod="openshift-marketplace/redhat-operators-psftr" Nov 22 00:29:34 crc kubenswrapper[4928]: I1122 00:29:34.773305 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4179d92-eb7c-4da2-9931-e976fdd7fadf-catalog-content\") pod \"redhat-operators-psftr\" (UID: \"f4179d92-eb7c-4da2-9931-e976fdd7fadf\") " pod="openshift-marketplace/redhat-operators-psftr" Nov 22 00:29:34 crc kubenswrapper[4928]: I1122 00:29:34.773396 4928 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-fb59f\" (UniqueName: \"kubernetes.io/projected/f4179d92-eb7c-4da2-9931-e976fdd7fadf-kube-api-access-fb59f\") pod \"redhat-operators-psftr\" (UID: \"f4179d92-eb7c-4da2-9931-e976fdd7fadf\") " pod="openshift-marketplace/redhat-operators-psftr" Nov 22 00:29:34 crc kubenswrapper[4928]: I1122 00:29:34.773470 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4179d92-eb7c-4da2-9931-e976fdd7fadf-utilities\") pod \"redhat-operators-psftr\" (UID: \"f4179d92-eb7c-4da2-9931-e976fdd7fadf\") " pod="openshift-marketplace/redhat-operators-psftr" Nov 22 00:29:34 crc kubenswrapper[4928]: I1122 00:29:34.773841 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4179d92-eb7c-4da2-9931-e976fdd7fadf-catalog-content\") pod \"redhat-operators-psftr\" (UID: \"f4179d92-eb7c-4da2-9931-e976fdd7fadf\") " pod="openshift-marketplace/redhat-operators-psftr" Nov 22 00:29:34 crc kubenswrapper[4928]: I1122 00:29:34.773896 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4179d92-eb7c-4da2-9931-e976fdd7fadf-utilities\") pod \"redhat-operators-psftr\" (UID: \"f4179d92-eb7c-4da2-9931-e976fdd7fadf\") " pod="openshift-marketplace/redhat-operators-psftr" Nov 22 00:29:34 crc kubenswrapper[4928]: I1122 00:29:34.794566 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb59f\" (UniqueName: \"kubernetes.io/projected/f4179d92-eb7c-4da2-9931-e976fdd7fadf-kube-api-access-fb59f\") pod \"redhat-operators-psftr\" (UID: \"f4179d92-eb7c-4da2-9931-e976fdd7fadf\") " pod="openshift-marketplace/redhat-operators-psftr" Nov 22 00:29:34 crc kubenswrapper[4928]: I1122 00:29:34.935803 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-psftr" Nov 22 00:29:35 crc kubenswrapper[4928]: W1122 00:29:35.950186 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaed83cda_231a_4eb5_8640_1d190d97595c.slice/crio-b92030f660faba5385764c1cf52c8b573dbcd16c4b63a88a2b673853aaf385f4 WatchSource:0}: Error finding container b92030f660faba5385764c1cf52c8b573dbcd16c4b63a88a2b673853aaf385f4: Status 404 returned error can't find the container with id b92030f660faba5385764c1cf52c8b573dbcd16c4b63a88a2b673853aaf385f4 Nov 22 00:29:36 crc kubenswrapper[4928]: I1122 00:29:36.707662 4928 patch_prober.go:28] interesting pod/machine-config-daemon-hp7cp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 00:29:36 crc kubenswrapper[4928]: I1122 00:29:36.708006 4928 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 00:29:36 crc kubenswrapper[4928]: I1122 00:29:36.762182 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-547448897-cv9jb" event={"ID":"aed83cda-231a-4eb5-8640-1d190d97595c","Type":"ContainerStarted","Data":"b92030f660faba5385764c1cf52c8b573dbcd16c4b63a88a2b673853aaf385f4"} Nov 22 00:29:36 crc kubenswrapper[4928]: I1122 00:29:36.855239 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-psftr"] Nov 22 00:29:38 crc kubenswrapper[4928]: W1122 00:29:38.315822 4928 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4179d92_eb7c_4da2_9931_e976fdd7fadf.slice/crio-f537d2b3733e05e497babec0d80733b79cafd96cf8eb7f7515e25a835ccb7e9c WatchSource:0}: Error finding container f537d2b3733e05e497babec0d80733b79cafd96cf8eb7f7515e25a835ccb7e9c: Status 404 returned error can't find the container with id f537d2b3733e05e497babec0d80733b79cafd96cf8eb7f7515e25a835ccb7e9c Nov 22 00:29:38 crc kubenswrapper[4928]: E1122 00:29:38.738420 4928 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/infrawatch/smart-gateway-operator:stable-1.5" Nov 22 00:29:38 crc kubenswrapper[4928]: E1122 00:29:38.738600 4928 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/infrawatch/smart-gateway-operator:stable-1.5,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:smart-gateway-operator,ValueFrom:nil,},EnvVar{Name:ANSIBLE_GATHERING,Value:explicit,ValueFrom:nil,},EnvVar{Name:ANSIBLE_VERBOSITY_SMARTGATEWAY_SMARTGATEWAY_INFRA_WATCH,Value:4,ValueFrom:nil,},EnvVar{Name:ANSIBLE_DEBUG_LOGS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CORE_SMARTGATEWAY_IMAGE,Value:image-registry.openshift-image-registry.svc:5000/service-telemetry/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BRIDGE_SMARTGATEWAY_IMAGE,Value:image-registry.openshift-image-registry.svc:5000/service-telemetry/sg-bridge:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OAUTH_PROXY_IMAGE,Value:quay.i
o/openshift/origin-oauth-proxy:latest,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:smart-gateway-operator.v5.0.1763771359,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:runner,ReadOnly:false,MountPath:/tmp/ansible-operator/runner,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nnj9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod smart-gateway-operator-7c8ffb5b8-v9769_service-telemetry(c1a256b5-b848-45ed-99e1-d01613414391): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 00:29:38 crc kubenswrapper[4928]: E1122 00:29:38.739859 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/smart-gateway-operator-7c8ffb5b8-v9769" podUID="c1a256b5-b848-45ed-99e1-d01613414391" Nov 22 00:29:38 crc kubenswrapper[4928]: I1122 00:29:38.776595 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psftr" 
event={"ID":"f4179d92-eb7c-4da2-9931-e976fdd7fadf","Type":"ContainerStarted","Data":"f537d2b3733e05e497babec0d80733b79cafd96cf8eb7f7515e25a835ccb7e9c"} Nov 22 00:29:38 crc kubenswrapper[4928]: E1122 00:29:38.779301 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/smart-gateway-operator:stable-1.5\\\"\"" pod="service-telemetry/smart-gateway-operator-7c8ffb5b8-v9769" podUID="c1a256b5-b848-45ed-99e1-d01613414391" Nov 22 00:29:39 crc kubenswrapper[4928]: I1122 00:29:39.786612 4928 generic.go:334] "Generic (PLEG): container finished" podID="f4179d92-eb7c-4da2-9931-e976fdd7fadf" containerID="4e47ecd71236c87242452de29d0c8dec5776df36a26fb93577b38a7a53303c80" exitCode=0 Nov 22 00:29:39 crc kubenswrapper[4928]: I1122 00:29:39.786857 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psftr" event={"ID":"f4179d92-eb7c-4da2-9931-e976fdd7fadf","Type":"ContainerDied","Data":"4e47ecd71236c87242452de29d0c8dec5776df36a26fb93577b38a7a53303c80"} Nov 22 00:29:40 crc kubenswrapper[4928]: I1122 00:29:40.795983 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psftr" event={"ID":"f4179d92-eb7c-4da2-9931-e976fdd7fadf","Type":"ContainerStarted","Data":"cd8e2e0f317aa4ae293c52ad9355cf2282a143e1126e96e0462701626b1caa30"} Nov 22 00:29:41 crc kubenswrapper[4928]: I1122 00:29:41.803520 4928 generic.go:334] "Generic (PLEG): container finished" podID="f4179d92-eb7c-4da2-9931-e976fdd7fadf" containerID="cd8e2e0f317aa4ae293c52ad9355cf2282a143e1126e96e0462701626b1caa30" exitCode=0 Nov 22 00:29:41 crc kubenswrapper[4928]: I1122 00:29:41.803671 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psftr" 
event={"ID":"f4179d92-eb7c-4da2-9931-e976fdd7fadf","Type":"ContainerDied","Data":"cd8e2e0f317aa4ae293c52ad9355cf2282a143e1126e96e0462701626b1caa30"} Nov 22 00:29:44 crc kubenswrapper[4928]: I1122 00:29:44.825025 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psftr" event={"ID":"f4179d92-eb7c-4da2-9931-e976fdd7fadf","Type":"ContainerStarted","Data":"615c85a8fd45a781fdccf017860c979c7cfc2f028acd9ecb9b48e90750a33ed8"} Nov 22 00:29:44 crc kubenswrapper[4928]: I1122 00:29:44.826786 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-547448897-cv9jb" event={"ID":"aed83cda-231a-4eb5-8640-1d190d97595c","Type":"ContainerStarted","Data":"844730636e8427fc9b34855f29aac72735767bdc2ae4dcdab4088e3f687cd4fe"} Nov 22 00:29:44 crc kubenswrapper[4928]: I1122 00:29:44.845227 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-psftr" podStartSLOduration=6.76975475 podStartE2EDuration="10.845206741s" podCreationTimestamp="2025-11-22 00:29:34 +0000 UTC" firstStartedPulling="2025-11-22 00:29:39.790059311 +0000 UTC m=+1415.078090512" lastFinishedPulling="2025-11-22 00:29:43.865511302 +0000 UTC m=+1419.153542503" observedRunningTime="2025-11-22 00:29:44.844370858 +0000 UTC m=+1420.132402049" watchObservedRunningTime="2025-11-22 00:29:44.845206741 +0000 UTC m=+1420.133237942" Nov 22 00:29:44 crc kubenswrapper[4928]: I1122 00:29:44.885097 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-547448897-cv9jb" podStartSLOduration=7.946374927 podStartE2EDuration="15.885068211s" podCreationTimestamp="2025-11-22 00:29:29 +0000 UTC" firstStartedPulling="2025-11-22 00:29:35.960051768 +0000 UTC m=+1411.248082959" lastFinishedPulling="2025-11-22 00:29:43.898745032 +0000 UTC m=+1419.186776243" observedRunningTime="2025-11-22 00:29:44.870771764 +0000 UTC 
m=+1420.158802975" watchObservedRunningTime="2025-11-22 00:29:44.885068211 +0000 UTC m=+1420.173099442" Nov 22 00:29:44 crc kubenswrapper[4928]: I1122 00:29:44.936141 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-psftr" Nov 22 00:29:44 crc kubenswrapper[4928]: I1122 00:29:44.936230 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-psftr" Nov 22 00:29:45 crc kubenswrapper[4928]: I1122 00:29:45.983521 4928 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-psftr" podUID="f4179d92-eb7c-4da2-9931-e976fdd7fadf" containerName="registry-server" probeResult="failure" output=< Nov 22 00:29:45 crc kubenswrapper[4928]: timeout: failed to connect service ":50051" within 1s Nov 22 00:29:45 crc kubenswrapper[4928]: > Nov 22 00:29:51 crc kubenswrapper[4928]: I1122 00:29:51.872127 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-7c8ffb5b8-v9769" event={"ID":"c1a256b5-b848-45ed-99e1-d01613414391","Type":"ContainerStarted","Data":"534dcd7c449bb81fa570dfdcbae91b0d6e97784edc30f312f426f56d6fee3b8b"} Nov 22 00:29:51 crc kubenswrapper[4928]: I1122 00:29:51.911216 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-7c8ffb5b8-v9769" podStartSLOduration=2.37661144 podStartE2EDuration="28.911186203s" podCreationTimestamp="2025-11-22 00:29:23 +0000 UTC" firstStartedPulling="2025-11-22 00:29:24.50654655 +0000 UTC m=+1399.794577781" lastFinishedPulling="2025-11-22 00:29:51.041121353 +0000 UTC m=+1426.329152544" observedRunningTime="2025-11-22 00:29:51.90000455 +0000 UTC m=+1427.188035781" watchObservedRunningTime="2025-11-22 00:29:51.911186203 +0000 UTC m=+1427.199217434" Nov 22 00:29:54 crc kubenswrapper[4928]: I1122 00:29:54.993006 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-operators-psftr" Nov 22 00:29:55 crc kubenswrapper[4928]: I1122 00:29:55.043234 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-psftr" Nov 22 00:29:55 crc kubenswrapper[4928]: I1122 00:29:55.236946 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-psftr"] Nov 22 00:29:56 crc kubenswrapper[4928]: I1122 00:29:56.924145 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-psftr" podUID="f4179d92-eb7c-4da2-9931-e976fdd7fadf" containerName="registry-server" containerID="cri-o://615c85a8fd45a781fdccf017860c979c7cfc2f028acd9ecb9b48e90750a33ed8" gracePeriod=2 Nov 22 00:29:57 crc kubenswrapper[4928]: I1122 00:29:57.930426 4928 generic.go:334] "Generic (PLEG): container finished" podID="f4179d92-eb7c-4da2-9931-e976fdd7fadf" containerID="615c85a8fd45a781fdccf017860c979c7cfc2f028acd9ecb9b48e90750a33ed8" exitCode=0 Nov 22 00:29:57 crc kubenswrapper[4928]: I1122 00:29:57.930576 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psftr" event={"ID":"f4179d92-eb7c-4da2-9931-e976fdd7fadf","Type":"ContainerDied","Data":"615c85a8fd45a781fdccf017860c979c7cfc2f028acd9ecb9b48e90750a33ed8"} Nov 22 00:29:59 crc kubenswrapper[4928]: I1122 00:29:59.513891 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-psftr" Nov 22 00:29:59 crc kubenswrapper[4928]: I1122 00:29:59.663948 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fb59f\" (UniqueName: \"kubernetes.io/projected/f4179d92-eb7c-4da2-9931-e976fdd7fadf-kube-api-access-fb59f\") pod \"f4179d92-eb7c-4da2-9931-e976fdd7fadf\" (UID: \"f4179d92-eb7c-4da2-9931-e976fdd7fadf\") " Nov 22 00:29:59 crc kubenswrapper[4928]: I1122 00:29:59.664068 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4179d92-eb7c-4da2-9931-e976fdd7fadf-catalog-content\") pod \"f4179d92-eb7c-4da2-9931-e976fdd7fadf\" (UID: \"f4179d92-eb7c-4da2-9931-e976fdd7fadf\") " Nov 22 00:29:59 crc kubenswrapper[4928]: I1122 00:29:59.664112 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4179d92-eb7c-4da2-9931-e976fdd7fadf-utilities\") pod \"f4179d92-eb7c-4da2-9931-e976fdd7fadf\" (UID: \"f4179d92-eb7c-4da2-9931-e976fdd7fadf\") " Nov 22 00:29:59 crc kubenswrapper[4928]: I1122 00:29:59.665107 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4179d92-eb7c-4da2-9931-e976fdd7fadf-utilities" (OuterVolumeSpecName: "utilities") pod "f4179d92-eb7c-4da2-9931-e976fdd7fadf" (UID: "f4179d92-eb7c-4da2-9931-e976fdd7fadf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:29:59 crc kubenswrapper[4928]: I1122 00:29:59.670121 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4179d92-eb7c-4da2-9931-e976fdd7fadf-kube-api-access-fb59f" (OuterVolumeSpecName: "kube-api-access-fb59f") pod "f4179d92-eb7c-4da2-9931-e976fdd7fadf" (UID: "f4179d92-eb7c-4da2-9931-e976fdd7fadf"). InnerVolumeSpecName "kube-api-access-fb59f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:29:59 crc kubenswrapper[4928]: I1122 00:29:59.765806 4928 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4179d92-eb7c-4da2-9931-e976fdd7fadf-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 00:29:59 crc kubenswrapper[4928]: I1122 00:29:59.765842 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fb59f\" (UniqueName: \"kubernetes.io/projected/f4179d92-eb7c-4da2-9931-e976fdd7fadf-kube-api-access-fb59f\") on node \"crc\" DevicePath \"\"" Nov 22 00:29:59 crc kubenswrapper[4928]: I1122 00:29:59.780676 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4179d92-eb7c-4da2-9931-e976fdd7fadf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4179d92-eb7c-4da2-9931-e976fdd7fadf" (UID: "f4179d92-eb7c-4da2-9931-e976fdd7fadf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:29:59 crc kubenswrapper[4928]: I1122 00:29:59.867299 4928 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4179d92-eb7c-4da2-9931-e976fdd7fadf-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 00:29:59 crc kubenswrapper[4928]: I1122 00:29:59.946156 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psftr" event={"ID":"f4179d92-eb7c-4da2-9931-e976fdd7fadf","Type":"ContainerDied","Data":"f537d2b3733e05e497babec0d80733b79cafd96cf8eb7f7515e25a835ccb7e9c"} Nov 22 00:29:59 crc kubenswrapper[4928]: I1122 00:29:59.946207 4928 scope.go:117] "RemoveContainer" containerID="615c85a8fd45a781fdccf017860c979c7cfc2f028acd9ecb9b48e90750a33ed8" Nov 22 00:29:59 crc kubenswrapper[4928]: I1122 00:29:59.946213 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-psftr" Nov 22 00:29:59 crc kubenswrapper[4928]: I1122 00:29:59.982724 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-psftr"] Nov 22 00:29:59 crc kubenswrapper[4928]: I1122 00:29:59.986823 4928 scope.go:117] "RemoveContainer" containerID="cd8e2e0f317aa4ae293c52ad9355cf2282a143e1126e96e0462701626b1caa30" Nov 22 00:29:59 crc kubenswrapper[4928]: I1122 00:29:59.989257 4928 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-psftr"] Nov 22 00:30:00 crc kubenswrapper[4928]: I1122 00:30:00.004870 4928 scope.go:117] "RemoveContainer" containerID="4e47ecd71236c87242452de29d0c8dec5776df36a26fb93577b38a7a53303c80" Nov 22 00:30:00 crc kubenswrapper[4928]: I1122 00:30:00.129044 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396190-sx75w"] Nov 22 00:30:00 crc kubenswrapper[4928]: E1122 00:30:00.129349 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4179d92-eb7c-4da2-9931-e976fdd7fadf" containerName="extract-content" Nov 22 00:30:00 crc kubenswrapper[4928]: I1122 00:30:00.129367 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4179d92-eb7c-4da2-9931-e976fdd7fadf" containerName="extract-content" Nov 22 00:30:00 crc kubenswrapper[4928]: E1122 00:30:00.129384 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4179d92-eb7c-4da2-9931-e976fdd7fadf" containerName="extract-utilities" Nov 22 00:30:00 crc kubenswrapper[4928]: I1122 00:30:00.129392 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4179d92-eb7c-4da2-9931-e976fdd7fadf" containerName="extract-utilities" Nov 22 00:30:00 crc kubenswrapper[4928]: E1122 00:30:00.129415 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4179d92-eb7c-4da2-9931-e976fdd7fadf" containerName="registry-server" Nov 22 00:30:00 crc 
kubenswrapper[4928]: I1122 00:30:00.129423 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4179d92-eb7c-4da2-9931-e976fdd7fadf" containerName="registry-server" Nov 22 00:30:00 crc kubenswrapper[4928]: I1122 00:30:00.129543 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4179d92-eb7c-4da2-9931-e976fdd7fadf" containerName="registry-server" Nov 22 00:30:00 crc kubenswrapper[4928]: I1122 00:30:00.130115 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396190-sx75w" Nov 22 00:30:00 crc kubenswrapper[4928]: I1122 00:30:00.137612 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 00:30:00 crc kubenswrapper[4928]: I1122 00:30:00.138022 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 00:30:00 crc kubenswrapper[4928]: I1122 00:30:00.143919 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396190-sx75w"] Nov 22 00:30:00 crc kubenswrapper[4928]: I1122 00:30:00.272512 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bjcd\" (UniqueName: \"kubernetes.io/projected/2cbde11d-4226-477c-9d6a-b3e82040c32f-kube-api-access-9bjcd\") pod \"collect-profiles-29396190-sx75w\" (UID: \"2cbde11d-4226-477c-9d6a-b3e82040c32f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396190-sx75w" Nov 22 00:30:00 crc kubenswrapper[4928]: I1122 00:30:00.272682 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cbde11d-4226-477c-9d6a-b3e82040c32f-config-volume\") pod \"collect-profiles-29396190-sx75w\" (UID: 
\"2cbde11d-4226-477c-9d6a-b3e82040c32f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396190-sx75w" Nov 22 00:30:00 crc kubenswrapper[4928]: I1122 00:30:00.272830 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2cbde11d-4226-477c-9d6a-b3e82040c32f-secret-volume\") pod \"collect-profiles-29396190-sx75w\" (UID: \"2cbde11d-4226-477c-9d6a-b3e82040c32f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396190-sx75w" Nov 22 00:30:00 crc kubenswrapper[4928]: I1122 00:30:00.373771 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cbde11d-4226-477c-9d6a-b3e82040c32f-config-volume\") pod \"collect-profiles-29396190-sx75w\" (UID: \"2cbde11d-4226-477c-9d6a-b3e82040c32f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396190-sx75w" Nov 22 00:30:00 crc kubenswrapper[4928]: I1122 00:30:00.373863 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2cbde11d-4226-477c-9d6a-b3e82040c32f-secret-volume\") pod \"collect-profiles-29396190-sx75w\" (UID: \"2cbde11d-4226-477c-9d6a-b3e82040c32f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396190-sx75w" Nov 22 00:30:00 crc kubenswrapper[4928]: I1122 00:30:00.373929 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bjcd\" (UniqueName: \"kubernetes.io/projected/2cbde11d-4226-477c-9d6a-b3e82040c32f-kube-api-access-9bjcd\") pod \"collect-profiles-29396190-sx75w\" (UID: \"2cbde11d-4226-477c-9d6a-b3e82040c32f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396190-sx75w" Nov 22 00:30:00 crc kubenswrapper[4928]: I1122 00:30:00.374914 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/2cbde11d-4226-477c-9d6a-b3e82040c32f-config-volume\") pod \"collect-profiles-29396190-sx75w\" (UID: \"2cbde11d-4226-477c-9d6a-b3e82040c32f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396190-sx75w" Nov 22 00:30:00 crc kubenswrapper[4928]: I1122 00:30:00.380325 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2cbde11d-4226-477c-9d6a-b3e82040c32f-secret-volume\") pod \"collect-profiles-29396190-sx75w\" (UID: \"2cbde11d-4226-477c-9d6a-b3e82040c32f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396190-sx75w" Nov 22 00:30:00 crc kubenswrapper[4928]: I1122 00:30:00.389782 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bjcd\" (UniqueName: \"kubernetes.io/projected/2cbde11d-4226-477c-9d6a-b3e82040c32f-kube-api-access-9bjcd\") pod \"collect-profiles-29396190-sx75w\" (UID: \"2cbde11d-4226-477c-9d6a-b3e82040c32f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396190-sx75w" Nov 22 00:30:00 crc kubenswrapper[4928]: I1122 00:30:00.456304 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396190-sx75w" Nov 22 00:30:00 crc kubenswrapper[4928]: I1122 00:30:00.671917 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396190-sx75w"] Nov 22 00:30:00 crc kubenswrapper[4928]: I1122 00:30:00.955130 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396190-sx75w" event={"ID":"2cbde11d-4226-477c-9d6a-b3e82040c32f","Type":"ContainerStarted","Data":"e1ba33659d217ab8f998ae6bbf33ef8c4e09993d990cfd0273747fbbfe5f07bf"} Nov 22 00:30:00 crc kubenswrapper[4928]: I1122 00:30:00.955175 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396190-sx75w" event={"ID":"2cbde11d-4226-477c-9d6a-b3e82040c32f","Type":"ContainerStarted","Data":"3f7313e4573c2802be064b25c53c0855f96574aa256f01d398e41cb0d4b90f74"} Nov 22 00:30:00 crc kubenswrapper[4928]: I1122 00:30:00.970949 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29396190-sx75w" podStartSLOduration=0.970934761 podStartE2EDuration="970.934761ms" podCreationTimestamp="2025-11-22 00:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:30:00.968272799 +0000 UTC m=+1436.256304010" watchObservedRunningTime="2025-11-22 00:30:00.970934761 +0000 UTC m=+1436.258965952" Nov 22 00:30:01 crc kubenswrapper[4928]: I1122 00:30:01.625198 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4179d92-eb7c-4da2-9931-e976fdd7fadf" path="/var/lib/kubelet/pods/f4179d92-eb7c-4da2-9931-e976fdd7fadf/volumes" Nov 22 00:30:01 crc kubenswrapper[4928]: I1122 00:30:01.972017 4928 generic.go:334] "Generic (PLEG): container finished" podID="2cbde11d-4226-477c-9d6a-b3e82040c32f" 
containerID="e1ba33659d217ab8f998ae6bbf33ef8c4e09993d990cfd0273747fbbfe5f07bf" exitCode=0 Nov 22 00:30:01 crc kubenswrapper[4928]: I1122 00:30:01.972087 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396190-sx75w" event={"ID":"2cbde11d-4226-477c-9d6a-b3e82040c32f","Type":"ContainerDied","Data":"e1ba33659d217ab8f998ae6bbf33ef8c4e09993d990cfd0273747fbbfe5f07bf"} Nov 22 00:30:03 crc kubenswrapper[4928]: I1122 00:30:03.253269 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396190-sx75w" Nov 22 00:30:03 crc kubenswrapper[4928]: I1122 00:30:03.327331 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cbde11d-4226-477c-9d6a-b3e82040c32f-config-volume\") pod \"2cbde11d-4226-477c-9d6a-b3e82040c32f\" (UID: \"2cbde11d-4226-477c-9d6a-b3e82040c32f\") " Nov 22 00:30:03 crc kubenswrapper[4928]: I1122 00:30:03.327375 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bjcd\" (UniqueName: \"kubernetes.io/projected/2cbde11d-4226-477c-9d6a-b3e82040c32f-kube-api-access-9bjcd\") pod \"2cbde11d-4226-477c-9d6a-b3e82040c32f\" (UID: \"2cbde11d-4226-477c-9d6a-b3e82040c32f\") " Nov 22 00:30:03 crc kubenswrapper[4928]: I1122 00:30:03.327408 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2cbde11d-4226-477c-9d6a-b3e82040c32f-secret-volume\") pod \"2cbde11d-4226-477c-9d6a-b3e82040c32f\" (UID: \"2cbde11d-4226-477c-9d6a-b3e82040c32f\") " Nov 22 00:30:03 crc kubenswrapper[4928]: I1122 00:30:03.328655 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cbde11d-4226-477c-9d6a-b3e82040c32f-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"2cbde11d-4226-477c-9d6a-b3e82040c32f" (UID: "2cbde11d-4226-477c-9d6a-b3e82040c32f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:30:03 crc kubenswrapper[4928]: I1122 00:30:03.332517 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cbde11d-4226-477c-9d6a-b3e82040c32f-kube-api-access-9bjcd" (OuterVolumeSpecName: "kube-api-access-9bjcd") pod "2cbde11d-4226-477c-9d6a-b3e82040c32f" (UID: "2cbde11d-4226-477c-9d6a-b3e82040c32f"). InnerVolumeSpecName "kube-api-access-9bjcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:30:03 crc kubenswrapper[4928]: I1122 00:30:03.334007 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cbde11d-4226-477c-9d6a-b3e82040c32f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2cbde11d-4226-477c-9d6a-b3e82040c32f" (UID: "2cbde11d-4226-477c-9d6a-b3e82040c32f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:30:03 crc kubenswrapper[4928]: I1122 00:30:03.428459 4928 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cbde11d-4226-477c-9d6a-b3e82040c32f-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 00:30:03 crc kubenswrapper[4928]: I1122 00:30:03.428499 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bjcd\" (UniqueName: \"kubernetes.io/projected/2cbde11d-4226-477c-9d6a-b3e82040c32f-kube-api-access-9bjcd\") on node \"crc\" DevicePath \"\"" Nov 22 00:30:03 crc kubenswrapper[4928]: I1122 00:30:03.428509 4928 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2cbde11d-4226-477c-9d6a-b3e82040c32f-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 00:30:03 crc kubenswrapper[4928]: I1122 00:30:03.990697 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396190-sx75w" event={"ID":"2cbde11d-4226-477c-9d6a-b3e82040c32f","Type":"ContainerDied","Data":"3f7313e4573c2802be064b25c53c0855f96574aa256f01d398e41cb0d4b90f74"} Nov 22 00:30:03 crc kubenswrapper[4928]: I1122 00:30:03.990822 4928 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f7313e4573c2802be064b25c53c0855f96574aa256f01d398e41cb0d4b90f74" Nov 22 00:30:03 crc kubenswrapper[4928]: I1122 00:30:03.990848 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396190-sx75w" Nov 22 00:30:05 crc kubenswrapper[4928]: I1122 00:30:05.906879 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-zpx48"] Nov 22 00:30:05 crc kubenswrapper[4928]: E1122 00:30:05.907440 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cbde11d-4226-477c-9d6a-b3e82040c32f" containerName="collect-profiles" Nov 22 00:30:05 crc kubenswrapper[4928]: I1122 00:30:05.907456 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cbde11d-4226-477c-9d6a-b3e82040c32f" containerName="collect-profiles" Nov 22 00:30:05 crc kubenswrapper[4928]: I1122 00:30:05.907609 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cbde11d-4226-477c-9d6a-b3e82040c32f" containerName="collect-profiles" Nov 22 00:30:05 crc kubenswrapper[4928]: I1122 00:30:05.908096 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-zpx48" Nov 22 00:30:05 crc kubenswrapper[4928]: I1122 00:30:05.909757 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca" Nov 22 00:30:05 crc kubenswrapper[4928]: I1122 00:30:05.911603 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials" Nov 22 00:30:05 crc kubenswrapper[4928]: I1122 00:30:05.911947 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config" Nov 22 00:30:05 crc kubenswrapper[4928]: I1122 00:30:05.912165 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca" Nov 22 00:30:05 crc kubenswrapper[4928]: I1122 00:30:05.912361 4928 reflector.go:368] Caches populated for *v1.Secret from 
object-"service-telemetry"/"default-interconnect-users" Nov 22 00:30:05 crc kubenswrapper[4928]: I1122 00:30:05.912619 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-8mmsf" Nov 22 00:30:05 crc kubenswrapper[4928]: I1122 00:30:05.916030 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-zpx48"] Nov 22 00:30:05 crc kubenswrapper[4928]: I1122 00:30:05.918605 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials" Nov 22 00:30:05 crc kubenswrapper[4928]: I1122 00:30:05.981964 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/ad4a716e-fa53-4438-9d94-eafc9b62af54-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-zpx48\" (UID: \"ad4a716e-fa53-4438-9d94-eafc9b62af54\") " pod="service-telemetry/default-interconnect-68864d46cb-zpx48" Nov 22 00:30:05 crc kubenswrapper[4928]: I1122 00:30:05.982033 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/ad4a716e-fa53-4438-9d94-eafc9b62af54-sasl-config\") pod \"default-interconnect-68864d46cb-zpx48\" (UID: \"ad4a716e-fa53-4438-9d94-eafc9b62af54\") " pod="service-telemetry/default-interconnect-68864d46cb-zpx48" Nov 22 00:30:05 crc kubenswrapper[4928]: I1122 00:30:05.982082 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/ad4a716e-fa53-4438-9d94-eafc9b62af54-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-zpx48\" (UID: \"ad4a716e-fa53-4438-9d94-eafc9b62af54\") " 
pod="service-telemetry/default-interconnect-68864d46cb-zpx48" Nov 22 00:30:05 crc kubenswrapper[4928]: I1122 00:30:05.982120 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/ad4a716e-fa53-4438-9d94-eafc9b62af54-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-zpx48\" (UID: \"ad4a716e-fa53-4438-9d94-eafc9b62af54\") " pod="service-telemetry/default-interconnect-68864d46cb-zpx48" Nov 22 00:30:05 crc kubenswrapper[4928]: I1122 00:30:05.982171 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmmqp\" (UniqueName: \"kubernetes.io/projected/ad4a716e-fa53-4438-9d94-eafc9b62af54-kube-api-access-bmmqp\") pod \"default-interconnect-68864d46cb-zpx48\" (UID: \"ad4a716e-fa53-4438-9d94-eafc9b62af54\") " pod="service-telemetry/default-interconnect-68864d46cb-zpx48" Nov 22 00:30:05 crc kubenswrapper[4928]: I1122 00:30:05.982212 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/ad4a716e-fa53-4438-9d94-eafc9b62af54-sasl-users\") pod \"default-interconnect-68864d46cb-zpx48\" (UID: \"ad4a716e-fa53-4438-9d94-eafc9b62af54\") " pod="service-telemetry/default-interconnect-68864d46cb-zpx48" Nov 22 00:30:05 crc kubenswrapper[4928]: I1122 00:30:05.982251 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/ad4a716e-fa53-4438-9d94-eafc9b62af54-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-zpx48\" (UID: \"ad4a716e-fa53-4438-9d94-eafc9b62af54\") " pod="service-telemetry/default-interconnect-68864d46cb-zpx48" Nov 22 00:30:06 crc kubenswrapper[4928]: I1122 00:30:06.083550 4928 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/ad4a716e-fa53-4438-9d94-eafc9b62af54-sasl-users\") pod \"default-interconnect-68864d46cb-zpx48\" (UID: \"ad4a716e-fa53-4438-9d94-eafc9b62af54\") " pod="service-telemetry/default-interconnect-68864d46cb-zpx48" Nov 22 00:30:06 crc kubenswrapper[4928]: I1122 00:30:06.083625 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/ad4a716e-fa53-4438-9d94-eafc9b62af54-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-zpx48\" (UID: \"ad4a716e-fa53-4438-9d94-eafc9b62af54\") " pod="service-telemetry/default-interconnect-68864d46cb-zpx48" Nov 22 00:30:06 crc kubenswrapper[4928]: I1122 00:30:06.083679 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/ad4a716e-fa53-4438-9d94-eafc9b62af54-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-zpx48\" (UID: \"ad4a716e-fa53-4438-9d94-eafc9b62af54\") " pod="service-telemetry/default-interconnect-68864d46cb-zpx48" Nov 22 00:30:06 crc kubenswrapper[4928]: I1122 00:30:06.083714 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/ad4a716e-fa53-4438-9d94-eafc9b62af54-sasl-config\") pod \"default-interconnect-68864d46cb-zpx48\" (UID: \"ad4a716e-fa53-4438-9d94-eafc9b62af54\") " pod="service-telemetry/default-interconnect-68864d46cb-zpx48" Nov 22 00:30:06 crc kubenswrapper[4928]: I1122 00:30:06.083753 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/ad4a716e-fa53-4438-9d94-eafc9b62af54-default-interconnect-inter-router-ca\") pod 
\"default-interconnect-68864d46cb-zpx48\" (UID: \"ad4a716e-fa53-4438-9d94-eafc9b62af54\") " pod="service-telemetry/default-interconnect-68864d46cb-zpx48" Nov 22 00:30:06 crc kubenswrapper[4928]: I1122 00:30:06.083775 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/ad4a716e-fa53-4438-9d94-eafc9b62af54-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-zpx48\" (UID: \"ad4a716e-fa53-4438-9d94-eafc9b62af54\") " pod="service-telemetry/default-interconnect-68864d46cb-zpx48" Nov 22 00:30:06 crc kubenswrapper[4928]: I1122 00:30:06.083837 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmmqp\" (UniqueName: \"kubernetes.io/projected/ad4a716e-fa53-4438-9d94-eafc9b62af54-kube-api-access-bmmqp\") pod \"default-interconnect-68864d46cb-zpx48\" (UID: \"ad4a716e-fa53-4438-9d94-eafc9b62af54\") " pod="service-telemetry/default-interconnect-68864d46cb-zpx48" Nov 22 00:30:06 crc kubenswrapper[4928]: I1122 00:30:06.084640 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/ad4a716e-fa53-4438-9d94-eafc9b62af54-sasl-config\") pod \"default-interconnect-68864d46cb-zpx48\" (UID: \"ad4a716e-fa53-4438-9d94-eafc9b62af54\") " pod="service-telemetry/default-interconnect-68864d46cb-zpx48" Nov 22 00:30:06 crc kubenswrapper[4928]: I1122 00:30:06.089035 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/ad4a716e-fa53-4438-9d94-eafc9b62af54-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-zpx48\" (UID: \"ad4a716e-fa53-4438-9d94-eafc9b62af54\") " pod="service-telemetry/default-interconnect-68864d46cb-zpx48" Nov 22 00:30:06 crc kubenswrapper[4928]: I1122 00:30:06.089218 4928 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/ad4a716e-fa53-4438-9d94-eafc9b62af54-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-zpx48\" (UID: \"ad4a716e-fa53-4438-9d94-eafc9b62af54\") " pod="service-telemetry/default-interconnect-68864d46cb-zpx48" Nov 22 00:30:06 crc kubenswrapper[4928]: I1122 00:30:06.089312 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/ad4a716e-fa53-4438-9d94-eafc9b62af54-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-zpx48\" (UID: \"ad4a716e-fa53-4438-9d94-eafc9b62af54\") " pod="service-telemetry/default-interconnect-68864d46cb-zpx48" Nov 22 00:30:06 crc kubenswrapper[4928]: I1122 00:30:06.091811 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/ad4a716e-fa53-4438-9d94-eafc9b62af54-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-zpx48\" (UID: \"ad4a716e-fa53-4438-9d94-eafc9b62af54\") " pod="service-telemetry/default-interconnect-68864d46cb-zpx48" Nov 22 00:30:06 crc kubenswrapper[4928]: I1122 00:30:06.092541 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/ad4a716e-fa53-4438-9d94-eafc9b62af54-sasl-users\") pod \"default-interconnect-68864d46cb-zpx48\" (UID: \"ad4a716e-fa53-4438-9d94-eafc9b62af54\") " pod="service-telemetry/default-interconnect-68864d46cb-zpx48" Nov 22 00:30:06 crc kubenswrapper[4928]: I1122 00:30:06.099722 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmmqp\" (UniqueName: \"kubernetes.io/projected/ad4a716e-fa53-4438-9d94-eafc9b62af54-kube-api-access-bmmqp\") pod \"default-interconnect-68864d46cb-zpx48\" (UID: 
\"ad4a716e-fa53-4438-9d94-eafc9b62af54\") " pod="service-telemetry/default-interconnect-68864d46cb-zpx48" Nov 22 00:30:06 crc kubenswrapper[4928]: I1122 00:30:06.236064 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-zpx48" Nov 22 00:30:06 crc kubenswrapper[4928]: I1122 00:30:06.662330 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-zpx48"] Nov 22 00:30:06 crc kubenswrapper[4928]: I1122 00:30:06.707191 4928 patch_prober.go:28] interesting pod/machine-config-daemon-hp7cp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 00:30:06 crc kubenswrapper[4928]: I1122 00:30:06.707308 4928 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 00:30:06 crc kubenswrapper[4928]: I1122 00:30:06.707376 4928 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" Nov 22 00:30:06 crc kubenswrapper[4928]: I1122 00:30:06.708437 4928 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2313a8e1c327d21680589179d678b34e89c05b2d249408c247573805e438ec91"} pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 00:30:06 crc kubenswrapper[4928]: I1122 00:30:06.708582 4928 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" containerID="cri-o://2313a8e1c327d21680589179d678b34e89c05b2d249408c247573805e438ec91" gracePeriod=600 Nov 22 00:30:07 crc kubenswrapper[4928]: I1122 00:30:07.020303 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-zpx48" event={"ID":"ad4a716e-fa53-4438-9d94-eafc9b62af54","Type":"ContainerStarted","Data":"516258354e0e7df9a23058c13132ed30b4c06308508562730a10a50c41275f62"} Nov 22 00:30:07 crc kubenswrapper[4928]: I1122 00:30:07.022544 4928 generic.go:334] "Generic (PLEG): container finished" podID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerID="2313a8e1c327d21680589179d678b34e89c05b2d249408c247573805e438ec91" exitCode=0 Nov 22 00:30:07 crc kubenswrapper[4928]: I1122 00:30:07.022575 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" event={"ID":"897ce64d-0600-4c6a-b684-0c1683fa14bc","Type":"ContainerDied","Data":"2313a8e1c327d21680589179d678b34e89c05b2d249408c247573805e438ec91"} Nov 22 00:30:07 crc kubenswrapper[4928]: I1122 00:30:07.022592 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" event={"ID":"897ce64d-0600-4c6a-b684-0c1683fa14bc","Type":"ContainerStarted","Data":"282a289cf12612849994dc9fccac42915b13cc2cdf7f7d4d5162c857fb9dc030"} Nov 22 00:30:07 crc kubenswrapper[4928]: I1122 00:30:07.022607 4928 scope.go:117] "RemoveContainer" containerID="dd8868cc76de4a44c75c379e88a3922b5b51242da6b3cba52bc0cd03f7e25709" Nov 22 00:30:07 crc kubenswrapper[4928]: I1122 00:30:07.585915 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dx98q"] Nov 22 00:30:07 crc kubenswrapper[4928]: I1122 00:30:07.587853 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dx98q" Nov 22 00:30:07 crc kubenswrapper[4928]: I1122 00:30:07.600198 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dx98q"] Nov 22 00:30:07 crc kubenswrapper[4928]: I1122 00:30:07.600708 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvl4b\" (UniqueName: \"kubernetes.io/projected/7d664dbb-ac6f-4a8f-9489-afc18716ee7b-kube-api-access-fvl4b\") pod \"certified-operators-dx98q\" (UID: \"7d664dbb-ac6f-4a8f-9489-afc18716ee7b\") " pod="openshift-marketplace/certified-operators-dx98q" Nov 22 00:30:07 crc kubenswrapper[4928]: I1122 00:30:07.600903 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d664dbb-ac6f-4a8f-9489-afc18716ee7b-catalog-content\") pod \"certified-operators-dx98q\" (UID: \"7d664dbb-ac6f-4a8f-9489-afc18716ee7b\") " pod="openshift-marketplace/certified-operators-dx98q" Nov 22 00:30:07 crc kubenswrapper[4928]: I1122 00:30:07.600959 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d664dbb-ac6f-4a8f-9489-afc18716ee7b-utilities\") pod \"certified-operators-dx98q\" (UID: \"7d664dbb-ac6f-4a8f-9489-afc18716ee7b\") " pod="openshift-marketplace/certified-operators-dx98q" Nov 22 00:30:07 crc kubenswrapper[4928]: I1122 00:30:07.701864 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvl4b\" (UniqueName: \"kubernetes.io/projected/7d664dbb-ac6f-4a8f-9489-afc18716ee7b-kube-api-access-fvl4b\") pod \"certified-operators-dx98q\" (UID: \"7d664dbb-ac6f-4a8f-9489-afc18716ee7b\") " pod="openshift-marketplace/certified-operators-dx98q" Nov 22 00:30:07 crc kubenswrapper[4928]: I1122 00:30:07.701965 4928 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d664dbb-ac6f-4a8f-9489-afc18716ee7b-catalog-content\") pod \"certified-operators-dx98q\" (UID: \"7d664dbb-ac6f-4a8f-9489-afc18716ee7b\") " pod="openshift-marketplace/certified-operators-dx98q" Nov 22 00:30:07 crc kubenswrapper[4928]: I1122 00:30:07.702160 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d664dbb-ac6f-4a8f-9489-afc18716ee7b-utilities\") pod \"certified-operators-dx98q\" (UID: \"7d664dbb-ac6f-4a8f-9489-afc18716ee7b\") " pod="openshift-marketplace/certified-operators-dx98q" Nov 22 00:30:07 crc kubenswrapper[4928]: I1122 00:30:07.702485 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d664dbb-ac6f-4a8f-9489-afc18716ee7b-catalog-content\") pod \"certified-operators-dx98q\" (UID: \"7d664dbb-ac6f-4a8f-9489-afc18716ee7b\") " pod="openshift-marketplace/certified-operators-dx98q" Nov 22 00:30:07 crc kubenswrapper[4928]: I1122 00:30:07.702520 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d664dbb-ac6f-4a8f-9489-afc18716ee7b-utilities\") pod \"certified-operators-dx98q\" (UID: \"7d664dbb-ac6f-4a8f-9489-afc18716ee7b\") " pod="openshift-marketplace/certified-operators-dx98q" Nov 22 00:30:07 crc kubenswrapper[4928]: I1122 00:30:07.735872 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvl4b\" (UniqueName: \"kubernetes.io/projected/7d664dbb-ac6f-4a8f-9489-afc18716ee7b-kube-api-access-fvl4b\") pod \"certified-operators-dx98q\" (UID: \"7d664dbb-ac6f-4a8f-9489-afc18716ee7b\") " pod="openshift-marketplace/certified-operators-dx98q" Nov 22 00:30:07 crc kubenswrapper[4928]: I1122 00:30:07.935733 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dx98q" Nov 22 00:30:08 crc kubenswrapper[4928]: I1122 00:30:08.177705 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dx98q"] Nov 22 00:30:08 crc kubenswrapper[4928]: W1122 00:30:08.205927 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d664dbb_ac6f_4a8f_9489_afc18716ee7b.slice/crio-cb276b0d6a5d01b6953a0496eb00f5dd72a98c4de934632df3393905cdeba01b WatchSource:0}: Error finding container cb276b0d6a5d01b6953a0496eb00f5dd72a98c4de934632df3393905cdeba01b: Status 404 returned error can't find the container with id cb276b0d6a5d01b6953a0496eb00f5dd72a98c4de934632df3393905cdeba01b Nov 22 00:30:09 crc kubenswrapper[4928]: I1122 00:30:09.050258 4928 generic.go:334] "Generic (PLEG): container finished" podID="7d664dbb-ac6f-4a8f-9489-afc18716ee7b" containerID="1b7cd9ac12b998a71e716e66d698fa0a3e00808bdbec1a1b4fab1d44d24919d5" exitCode=0 Nov 22 00:30:09 crc kubenswrapper[4928]: I1122 00:30:09.050361 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dx98q" event={"ID":"7d664dbb-ac6f-4a8f-9489-afc18716ee7b","Type":"ContainerDied","Data":"1b7cd9ac12b998a71e716e66d698fa0a3e00808bdbec1a1b4fab1d44d24919d5"} Nov 22 00:30:09 crc kubenswrapper[4928]: I1122 00:30:09.050576 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dx98q" event={"ID":"7d664dbb-ac6f-4a8f-9489-afc18716ee7b","Type":"ContainerStarted","Data":"cb276b0d6a5d01b6953a0496eb00f5dd72a98c4de934632df3393905cdeba01b"} Nov 22 00:30:13 crc kubenswrapper[4928]: I1122 00:30:13.078153 4928 generic.go:334] "Generic (PLEG): container finished" podID="7d664dbb-ac6f-4a8f-9489-afc18716ee7b" containerID="779356d9e33daab793fb8b8c85cdf140cd7c10bf8794b56e1d5c36f57e67a0b3" exitCode=0 Nov 22 00:30:13 crc kubenswrapper[4928]: I1122 
00:30:13.078304 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dx98q" event={"ID":"7d664dbb-ac6f-4a8f-9489-afc18716ee7b","Type":"ContainerDied","Data":"779356d9e33daab793fb8b8c85cdf140cd7c10bf8794b56e1d5c36f57e67a0b3"} Nov 22 00:30:13 crc kubenswrapper[4928]: I1122 00:30:13.084026 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-zpx48" event={"ID":"ad4a716e-fa53-4438-9d94-eafc9b62af54","Type":"ContainerStarted","Data":"5faef04e7fb78e50db5a9ef1c813b8a6c49f002f7347a7b87cede03dd4cc534e"} Nov 22 00:30:13 crc kubenswrapper[4928]: I1122 00:30:13.122043 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-zpx48" podStartSLOduration=2.394892049 podStartE2EDuration="8.122017384s" podCreationTimestamp="2025-11-22 00:30:05 +0000 UTC" firstStartedPulling="2025-11-22 00:30:06.671834185 +0000 UTC m=+1441.959865416" lastFinishedPulling="2025-11-22 00:30:12.39895951 +0000 UTC m=+1447.686990751" observedRunningTime="2025-11-22 00:30:13.114024417 +0000 UTC m=+1448.402055628" watchObservedRunningTime="2025-11-22 00:30:13.122017384 +0000 UTC m=+1448.410048615" Nov 22 00:30:14 crc kubenswrapper[4928]: I1122 00:30:14.093501 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dx98q" event={"ID":"7d664dbb-ac6f-4a8f-9489-afc18716ee7b","Type":"ContainerStarted","Data":"41c02afa37b3f292eaea148c05723f6b53249be2473ebd3677a87bc841f36e7f"} Nov 22 00:30:15 crc kubenswrapper[4928]: I1122 00:30:15.849456 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dx98q" podStartSLOduration=4.348452551 podStartE2EDuration="8.849431531s" podCreationTimestamp="2025-11-22 00:30:07 +0000 UTC" firstStartedPulling="2025-11-22 00:30:09.053431518 +0000 UTC m=+1444.341462709" lastFinishedPulling="2025-11-22 
00:30:13.554410488 +0000 UTC m=+1448.842441689" observedRunningTime="2025-11-22 00:30:14.11865503 +0000 UTC m=+1449.406686221" watchObservedRunningTime="2025-11-22 00:30:15.849431531 +0000 UTC m=+1451.137462732" Nov 22 00:30:15 crc kubenswrapper[4928]: I1122 00:30:15.853522 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Nov 22 00:30:15 crc kubenswrapper[4928]: I1122 00:30:15.855145 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Nov 22 00:30:15 crc kubenswrapper[4928]: I1122 00:30:15.859377 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-web-config" Nov 22 00:30:15 crc kubenswrapper[4928]: I1122 00:30:15.859833 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default" Nov 22 00:30:15 crc kubenswrapper[4928]: I1122 00:30:15.860181 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"serving-certs-ca-bundle" Nov 22 00:30:15 crc kubenswrapper[4928]: I1122 00:30:15.860418 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-0" Nov 22 00:30:15 crc kubenswrapper[4928]: I1122 00:30:15.860639 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-stf-dockercfg-r2d9c" Nov 22 00:30:15 crc kubenswrapper[4928]: I1122 00:30:15.860918 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-tls-assets-0" Nov 22 00:30:15 crc kubenswrapper[4928]: I1122 00:30:15.861861 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-prometheus-proxy-tls" Nov 22 00:30:15 crc kubenswrapper[4928]: I1122 00:30:15.862100 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-session-secret" 
Nov 22 00:30:15 crc kubenswrapper[4928]: I1122 00:30:15.885150 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.012444 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c1110f47-f3a1-4b20-b02f-2071b51c23c7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1110f47-f3a1-4b20-b02f-2071b51c23c7\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.012974 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-web-config\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.013012 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.014060 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-tls-assets\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.014157 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.014190 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-config-out\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.014227 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.014253 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbl7l\" (UniqueName: \"kubernetes.io/projected/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-kube-api-access-hbl7l\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.014288 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-config\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.014308 4928 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.115890 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbl7l\" (UniqueName: \"kubernetes.io/projected/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-kube-api-access-hbl7l\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.115964 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-config\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.115998 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.116264 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c1110f47-f3a1-4b20-b02f-2071b51c23c7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1110f47-f3a1-4b20-b02f-2071b51c23c7\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.116333 4928 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-web-config\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.116370 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.116407 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-tls-assets\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.116458 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.116495 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-config-out\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.116540 4928 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.118182 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:16 crc kubenswrapper[4928]: E1122 00:30:16.123406 4928 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Nov 22 00:30:16 crc kubenswrapper[4928]: E1122 00:30:16.123562 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-secret-default-prometheus-proxy-tls podName:a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad nodeName:}" failed. No retries permitted until 2025-11-22 00:30:16.62352386 +0000 UTC m=+1451.911555111 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad") : secret "default-prometheus-proxy-tls" not found Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.124734 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.128233 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-web-config\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.128288 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-config-out\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.129783 4928 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.129891 4928 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c1110f47-f3a1-4b20-b02f-2071b51c23c7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1110f47-f3a1-4b20-b02f-2071b51c23c7\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7a04927b11f125b4661a8785e6454c3a1a2d8dc0a13a4fa148c0dcc4067e1b03/globalmount\"" pod="service-telemetry/prometheus-default-0" Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.137902 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-tls-assets\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.138220 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.142028 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-config\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.149360 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbl7l\" (UniqueName: \"kubernetes.io/projected/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-kube-api-access-hbl7l\") pod 
\"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.155621 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c1110f47-f3a1-4b20-b02f-2071b51c23c7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1110f47-f3a1-4b20-b02f-2071b51c23c7\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:16 crc kubenswrapper[4928]: I1122 00:30:16.724784 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:16 crc kubenswrapper[4928]: E1122 00:30:16.724971 4928 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Nov 22 00:30:16 crc kubenswrapper[4928]: E1122 00:30:16.725216 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-secret-default-prometheus-proxy-tls podName:a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad nodeName:}" failed. No retries permitted until 2025-11-22 00:30:17.725196088 +0000 UTC m=+1453.013227269 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad") : secret "default-prometheus-proxy-tls" not found Nov 22 00:30:17 crc kubenswrapper[4928]: I1122 00:30:17.738492 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:17 crc kubenswrapper[4928]: I1122 00:30:17.745222 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad\") " pod="service-telemetry/prometheus-default-0" Nov 22 00:30:17 crc kubenswrapper[4928]: I1122 00:30:17.936938 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dx98q" Nov 22 00:30:17 crc kubenswrapper[4928]: I1122 00:30:17.936987 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dx98q" Nov 22 00:30:17 crc kubenswrapper[4928]: I1122 00:30:17.974202 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Nov 22 00:30:18 crc kubenswrapper[4928]: I1122 00:30:18.006717 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dx98q" Nov 22 00:30:18 crc kubenswrapper[4928]: I1122 00:30:18.179777 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dx98q" Nov 22 00:30:18 crc kubenswrapper[4928]: I1122 00:30:18.239992 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dx98q"] Nov 22 00:30:18 crc kubenswrapper[4928]: W1122 00:30:18.401631 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1477d3a_da77_4acc_8bd0_7f5e5cdf32ad.slice/crio-a8581f222e6fc8a1a694a4a8f8f2751412f8b1b7a625d8d0ac2684b17e6bd79e WatchSource:0}: Error finding container a8581f222e6fc8a1a694a4a8f8f2751412f8b1b7a625d8d0ac2684b17e6bd79e: Status 404 returned error can't find the container with id a8581f222e6fc8a1a694a4a8f8f2751412f8b1b7a625d8d0ac2684b17e6bd79e Nov 22 00:30:18 crc kubenswrapper[4928]: I1122 00:30:18.412695 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Nov 22 00:30:19 crc kubenswrapper[4928]: I1122 00:30:19.137342 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad","Type":"ContainerStarted","Data":"a8581f222e6fc8a1a694a4a8f8f2751412f8b1b7a625d8d0ac2684b17e6bd79e"} Nov 22 00:30:20 crc kubenswrapper[4928]: I1122 00:30:20.145062 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dx98q" podUID="7d664dbb-ac6f-4a8f-9489-afc18716ee7b" containerName="registry-server" containerID="cri-o://41c02afa37b3f292eaea148c05723f6b53249be2473ebd3677a87bc841f36e7f" 
gracePeriod=2 Nov 22 00:30:20 crc kubenswrapper[4928]: I1122 00:30:20.586657 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dx98q" Nov 22 00:30:20 crc kubenswrapper[4928]: I1122 00:30:20.681716 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d664dbb-ac6f-4a8f-9489-afc18716ee7b-catalog-content\") pod \"7d664dbb-ac6f-4a8f-9489-afc18716ee7b\" (UID: \"7d664dbb-ac6f-4a8f-9489-afc18716ee7b\") " Nov 22 00:30:20 crc kubenswrapper[4928]: I1122 00:30:20.681840 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d664dbb-ac6f-4a8f-9489-afc18716ee7b-utilities\") pod \"7d664dbb-ac6f-4a8f-9489-afc18716ee7b\" (UID: \"7d664dbb-ac6f-4a8f-9489-afc18716ee7b\") " Nov 22 00:30:20 crc kubenswrapper[4928]: I1122 00:30:20.681879 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvl4b\" (UniqueName: \"kubernetes.io/projected/7d664dbb-ac6f-4a8f-9489-afc18716ee7b-kube-api-access-fvl4b\") pod \"7d664dbb-ac6f-4a8f-9489-afc18716ee7b\" (UID: \"7d664dbb-ac6f-4a8f-9489-afc18716ee7b\") " Nov 22 00:30:20 crc kubenswrapper[4928]: I1122 00:30:20.684306 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d664dbb-ac6f-4a8f-9489-afc18716ee7b-utilities" (OuterVolumeSpecName: "utilities") pod "7d664dbb-ac6f-4a8f-9489-afc18716ee7b" (UID: "7d664dbb-ac6f-4a8f-9489-afc18716ee7b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:30:20 crc kubenswrapper[4928]: I1122 00:30:20.689389 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d664dbb-ac6f-4a8f-9489-afc18716ee7b-kube-api-access-fvl4b" (OuterVolumeSpecName: "kube-api-access-fvl4b") pod "7d664dbb-ac6f-4a8f-9489-afc18716ee7b" (UID: "7d664dbb-ac6f-4a8f-9489-afc18716ee7b"). InnerVolumeSpecName "kube-api-access-fvl4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:30:20 crc kubenswrapper[4928]: I1122 00:30:20.734745 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d664dbb-ac6f-4a8f-9489-afc18716ee7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d664dbb-ac6f-4a8f-9489-afc18716ee7b" (UID: "7d664dbb-ac6f-4a8f-9489-afc18716ee7b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:30:20 crc kubenswrapper[4928]: I1122 00:30:20.783754 4928 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d664dbb-ac6f-4a8f-9489-afc18716ee7b-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 00:30:20 crc kubenswrapper[4928]: I1122 00:30:20.784298 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvl4b\" (UniqueName: \"kubernetes.io/projected/7d664dbb-ac6f-4a8f-9489-afc18716ee7b-kube-api-access-fvl4b\") on node \"crc\" DevicePath \"\"" Nov 22 00:30:20 crc kubenswrapper[4928]: I1122 00:30:20.784442 4928 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d664dbb-ac6f-4a8f-9489-afc18716ee7b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 00:30:21 crc kubenswrapper[4928]: I1122 00:30:21.153727 4928 generic.go:334] "Generic (PLEG): container finished" podID="7d664dbb-ac6f-4a8f-9489-afc18716ee7b" 
containerID="41c02afa37b3f292eaea148c05723f6b53249be2473ebd3677a87bc841f36e7f" exitCode=0 Nov 22 00:30:21 crc kubenswrapper[4928]: I1122 00:30:21.154086 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dx98q" event={"ID":"7d664dbb-ac6f-4a8f-9489-afc18716ee7b","Type":"ContainerDied","Data":"41c02afa37b3f292eaea148c05723f6b53249be2473ebd3677a87bc841f36e7f"} Nov 22 00:30:21 crc kubenswrapper[4928]: I1122 00:30:21.154116 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dx98q" event={"ID":"7d664dbb-ac6f-4a8f-9489-afc18716ee7b","Type":"ContainerDied","Data":"cb276b0d6a5d01b6953a0496eb00f5dd72a98c4de934632df3393905cdeba01b"} Nov 22 00:30:21 crc kubenswrapper[4928]: I1122 00:30:21.154136 4928 scope.go:117] "RemoveContainer" containerID="41c02afa37b3f292eaea148c05723f6b53249be2473ebd3677a87bc841f36e7f" Nov 22 00:30:21 crc kubenswrapper[4928]: I1122 00:30:21.154291 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dx98q" Nov 22 00:30:21 crc kubenswrapper[4928]: I1122 00:30:21.179474 4928 scope.go:117] "RemoveContainer" containerID="779356d9e33daab793fb8b8c85cdf140cd7c10bf8794b56e1d5c36f57e67a0b3" Nov 22 00:30:21 crc kubenswrapper[4928]: I1122 00:30:21.208465 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dx98q"] Nov 22 00:30:21 crc kubenswrapper[4928]: I1122 00:30:21.215536 4928 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dx98q"] Nov 22 00:30:21 crc kubenswrapper[4928]: I1122 00:30:21.216818 4928 scope.go:117] "RemoveContainer" containerID="1b7cd9ac12b998a71e716e66d698fa0a3e00808bdbec1a1b4fab1d44d24919d5" Nov 22 00:30:21 crc kubenswrapper[4928]: I1122 00:30:21.240466 4928 scope.go:117] "RemoveContainer" containerID="41c02afa37b3f292eaea148c05723f6b53249be2473ebd3677a87bc841f36e7f" Nov 22 00:30:21 crc kubenswrapper[4928]: E1122 00:30:21.241029 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41c02afa37b3f292eaea148c05723f6b53249be2473ebd3677a87bc841f36e7f\": container with ID starting with 41c02afa37b3f292eaea148c05723f6b53249be2473ebd3677a87bc841f36e7f not found: ID does not exist" containerID="41c02afa37b3f292eaea148c05723f6b53249be2473ebd3677a87bc841f36e7f" Nov 22 00:30:21 crc kubenswrapper[4928]: I1122 00:30:21.241130 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41c02afa37b3f292eaea148c05723f6b53249be2473ebd3677a87bc841f36e7f"} err="failed to get container status \"41c02afa37b3f292eaea148c05723f6b53249be2473ebd3677a87bc841f36e7f\": rpc error: code = NotFound desc = could not find container \"41c02afa37b3f292eaea148c05723f6b53249be2473ebd3677a87bc841f36e7f\": container with ID starting with 41c02afa37b3f292eaea148c05723f6b53249be2473ebd3677a87bc841f36e7f not 
found: ID does not exist" Nov 22 00:30:21 crc kubenswrapper[4928]: I1122 00:30:21.241253 4928 scope.go:117] "RemoveContainer" containerID="779356d9e33daab793fb8b8c85cdf140cd7c10bf8794b56e1d5c36f57e67a0b3" Nov 22 00:30:21 crc kubenswrapper[4928]: E1122 00:30:21.241617 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"779356d9e33daab793fb8b8c85cdf140cd7c10bf8794b56e1d5c36f57e67a0b3\": container with ID starting with 779356d9e33daab793fb8b8c85cdf140cd7c10bf8794b56e1d5c36f57e67a0b3 not found: ID does not exist" containerID="779356d9e33daab793fb8b8c85cdf140cd7c10bf8794b56e1d5c36f57e67a0b3" Nov 22 00:30:21 crc kubenswrapper[4928]: I1122 00:30:21.241669 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"779356d9e33daab793fb8b8c85cdf140cd7c10bf8794b56e1d5c36f57e67a0b3"} err="failed to get container status \"779356d9e33daab793fb8b8c85cdf140cd7c10bf8794b56e1d5c36f57e67a0b3\": rpc error: code = NotFound desc = could not find container \"779356d9e33daab793fb8b8c85cdf140cd7c10bf8794b56e1d5c36f57e67a0b3\": container with ID starting with 779356d9e33daab793fb8b8c85cdf140cd7c10bf8794b56e1d5c36f57e67a0b3 not found: ID does not exist" Nov 22 00:30:21 crc kubenswrapper[4928]: I1122 00:30:21.241726 4928 scope.go:117] "RemoveContainer" containerID="1b7cd9ac12b998a71e716e66d698fa0a3e00808bdbec1a1b4fab1d44d24919d5" Nov 22 00:30:21 crc kubenswrapper[4928]: E1122 00:30:21.242220 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b7cd9ac12b998a71e716e66d698fa0a3e00808bdbec1a1b4fab1d44d24919d5\": container with ID starting with 1b7cd9ac12b998a71e716e66d698fa0a3e00808bdbec1a1b4fab1d44d24919d5 not found: ID does not exist" containerID="1b7cd9ac12b998a71e716e66d698fa0a3e00808bdbec1a1b4fab1d44d24919d5" Nov 22 00:30:21 crc kubenswrapper[4928]: I1122 00:30:21.242246 4928 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b7cd9ac12b998a71e716e66d698fa0a3e00808bdbec1a1b4fab1d44d24919d5"} err="failed to get container status \"1b7cd9ac12b998a71e716e66d698fa0a3e00808bdbec1a1b4fab1d44d24919d5\": rpc error: code = NotFound desc = could not find container \"1b7cd9ac12b998a71e716e66d698fa0a3e00808bdbec1a1b4fab1d44d24919d5\": container with ID starting with 1b7cd9ac12b998a71e716e66d698fa0a3e00808bdbec1a1b4fab1d44d24919d5 not found: ID does not exist" Nov 22 00:30:21 crc kubenswrapper[4928]: I1122 00:30:21.625938 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d664dbb-ac6f-4a8f-9489-afc18716ee7b" path="/var/lib/kubelet/pods/7d664dbb-ac6f-4a8f-9489-afc18716ee7b/volumes" Nov 22 00:30:22 crc kubenswrapper[4928]: I1122 00:30:22.160737 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad","Type":"ContainerStarted","Data":"45fab7526b9176d26f6912cc0721f3512fc0c49c85be6ff07b7b61bcd96a029d"} Nov 22 00:30:25 crc kubenswrapper[4928]: I1122 00:30:25.607716 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-8qmqr"] Nov 22 00:30:25 crc kubenswrapper[4928]: E1122 00:30:25.608672 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d664dbb-ac6f-4a8f-9489-afc18716ee7b" containerName="extract-utilities" Nov 22 00:30:25 crc kubenswrapper[4928]: I1122 00:30:25.608688 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d664dbb-ac6f-4a8f-9489-afc18716ee7b" containerName="extract-utilities" Nov 22 00:30:25 crc kubenswrapper[4928]: E1122 00:30:25.608706 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d664dbb-ac6f-4a8f-9489-afc18716ee7b" containerName="extract-content" Nov 22 00:30:25 crc kubenswrapper[4928]: I1122 00:30:25.608714 4928 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7d664dbb-ac6f-4a8f-9489-afc18716ee7b" containerName="extract-content" Nov 22 00:30:25 crc kubenswrapper[4928]: E1122 00:30:25.608727 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d664dbb-ac6f-4a8f-9489-afc18716ee7b" containerName="registry-server" Nov 22 00:30:25 crc kubenswrapper[4928]: I1122 00:30:25.608735 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d664dbb-ac6f-4a8f-9489-afc18716ee7b" containerName="registry-server" Nov 22 00:30:25 crc kubenswrapper[4928]: I1122 00:30:25.608875 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d664dbb-ac6f-4a8f-9489-afc18716ee7b" containerName="registry-server" Nov 22 00:30:25 crc kubenswrapper[4928]: I1122 00:30:25.609379 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6856cfb745-8qmqr" Nov 22 00:30:25 crc kubenswrapper[4928]: I1122 00:30:25.617107 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-8qmqr"] Nov 22 00:30:25 crc kubenswrapper[4928]: I1122 00:30:25.744559 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkkdg\" (UniqueName: \"kubernetes.io/projected/15489fd9-279f-4b12-9a61-54fc3a4ee40b-kube-api-access-gkkdg\") pod \"default-snmp-webhook-6856cfb745-8qmqr\" (UID: \"15489fd9-279f-4b12-9a61-54fc3a4ee40b\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-8qmqr" Nov 22 00:30:25 crc kubenswrapper[4928]: I1122 00:30:25.845655 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkkdg\" (UniqueName: \"kubernetes.io/projected/15489fd9-279f-4b12-9a61-54fc3a4ee40b-kube-api-access-gkkdg\") pod \"default-snmp-webhook-6856cfb745-8qmqr\" (UID: \"15489fd9-279f-4b12-9a61-54fc3a4ee40b\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-8qmqr" Nov 22 00:30:25 crc kubenswrapper[4928]: I1122 00:30:25.864214 
4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkkdg\" (UniqueName: \"kubernetes.io/projected/15489fd9-279f-4b12-9a61-54fc3a4ee40b-kube-api-access-gkkdg\") pod \"default-snmp-webhook-6856cfb745-8qmqr\" (UID: \"15489fd9-279f-4b12-9a61-54fc3a4ee40b\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-8qmqr" Nov 22 00:30:25 crc kubenswrapper[4928]: I1122 00:30:25.926311 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6856cfb745-8qmqr" Nov 22 00:30:26 crc kubenswrapper[4928]: I1122 00:30:26.152817 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-8qmqr"] Nov 22 00:30:26 crc kubenswrapper[4928]: W1122 00:30:26.161089 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15489fd9_279f_4b12_9a61_54fc3a4ee40b.slice/crio-365209cf248314582c267598238f4c00d0d1d0823f82b89a24f6ee8b171c514e WatchSource:0}: Error finding container 365209cf248314582c267598238f4c00d0d1d0823f82b89a24f6ee8b171c514e: Status 404 returned error can't find the container with id 365209cf248314582c267598238f4c00d0d1d0823f82b89a24f6ee8b171c514e Nov 22 00:30:26 crc kubenswrapper[4928]: I1122 00:30:26.184434 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6856cfb745-8qmqr" event={"ID":"15489fd9-279f-4b12-9a61-54fc3a4ee40b","Type":"ContainerStarted","Data":"365209cf248314582c267598238f4c00d0d1d0823f82b89a24f6ee8b171c514e"} Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.172956 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.175183 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.178198 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-alertmanager-proxy-tls" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.178237 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-generated" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.178257 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-stf-dockercfg-85sn4" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.178432 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-tls-assets-0" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.178553 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-web-config" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.178670 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-cluster-tls-config" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.193692 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.212305 4928 generic.go:334] "Generic (PLEG): container finished" podID="a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad" containerID="45fab7526b9176d26f6912cc0721f3512fc0c49c85be6ff07b7b61bcd96a029d" exitCode=0 Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.212346 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad","Type":"ContainerDied","Data":"45fab7526b9176d26f6912cc0721f3512fc0c49c85be6ff07b7b61bcd96a029d"} Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.294690 4928 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-tls-assets\") pod \"alertmanager-default-0\" (UID: \"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2\") " pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.294754 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2\") " pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.294845 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5rz4\" (UniqueName: \"kubernetes.io/projected/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-kube-api-access-n5rz4\") pod \"alertmanager-default-0\" (UID: \"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2\") " pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.294893 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e1e14342-923c-4505-a095-ac0dc6b3a552\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1e14342-923c-4505-a095-ac0dc6b3a552\") pod \"alertmanager-default-0\" (UID: \"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2\") " pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.294919 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-config-out\") pod \"alertmanager-default-0\" (UID: \"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2\") " 
pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.294989 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2\") " pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.295009 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2\") " pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.295027 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-web-config\") pod \"alertmanager-default-0\" (UID: \"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2\") " pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.295058 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-config-volume\") pod \"alertmanager-default-0\" (UID: \"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2\") " pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.396614 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-web-config\") pod \"alertmanager-default-0\" (UID: 
\"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2\") " pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.396671 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-config-volume\") pod \"alertmanager-default-0\" (UID: \"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2\") " pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.396707 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-tls-assets\") pod \"alertmanager-default-0\" (UID: \"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2\") " pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.396726 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2\") " pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.396744 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5rz4\" (UniqueName: \"kubernetes.io/projected/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-kube-api-access-n5rz4\") pod \"alertmanager-default-0\" (UID: \"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2\") " pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.396772 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e1e14342-923c-4505-a095-ac0dc6b3a552\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1e14342-923c-4505-a095-ac0dc6b3a552\") pod \"alertmanager-default-0\" (UID: 
\"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2\") " pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.396822 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-config-out\") pod \"alertmanager-default-0\" (UID: \"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2\") " pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.396882 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2\") " pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.396910 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2\") " pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:29 crc kubenswrapper[4928]: E1122 00:30:29.397505 4928 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Nov 22 00:30:29 crc kubenswrapper[4928]: E1122 00:30:29.397586 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-secret-default-alertmanager-proxy-tls podName:fcbba063-eb3a-45ff-bcfb-b50cfe532ee2 nodeName:}" failed. No retries permitted until 2025-11-22 00:30:29.897566987 +0000 UTC m=+1465.185598278 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "fcbba063-eb3a-45ff-bcfb-b50cfe532ee2") : secret "default-alertmanager-proxy-tls" not found Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.400630 4928 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.400671 4928 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e1e14342-923c-4505-a095-ac0dc6b3a552\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1e14342-923c-4505-a095-ac0dc6b3a552\") pod \"alertmanager-default-0\" (UID: \"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b338a0ba574258ceb0debf9e02109599e1094283f5fd83179e9996008589b2d0/globalmount\"" pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.402205 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-config-out\") pod \"alertmanager-default-0\" (UID: \"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2\") " pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.402228 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-tls-assets\") pod \"alertmanager-default-0\" (UID: \"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2\") " pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.402343 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2\") " pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.407914 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-web-config\") pod \"alertmanager-default-0\" (UID: \"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2\") " pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.408217 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-config-volume\") pod \"alertmanager-default-0\" (UID: \"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2\") " pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.414237 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5rz4\" (UniqueName: \"kubernetes.io/projected/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-kube-api-access-n5rz4\") pod \"alertmanager-default-0\" (UID: \"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2\") " pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.419779 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2\") " pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.430023 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e1e14342-923c-4505-a095-ac0dc6b3a552\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1e14342-923c-4505-a095-ac0dc6b3a552\") pod \"alertmanager-default-0\" (UID: \"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2\") " pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:29 crc kubenswrapper[4928]: I1122 00:30:29.911910 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2\") " pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:29 crc kubenswrapper[4928]: E1122 00:30:29.912088 4928 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Nov 22 00:30:29 crc kubenswrapper[4928]: E1122 00:30:29.912198 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-secret-default-alertmanager-proxy-tls podName:fcbba063-eb3a-45ff-bcfb-b50cfe532ee2 nodeName:}" failed. No retries permitted until 2025-11-22 00:30:30.912175233 +0000 UTC m=+1466.200206424 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "fcbba063-eb3a-45ff-bcfb-b50cfe532ee2") : secret "default-alertmanager-proxy-tls" not found Nov 22 00:30:30 crc kubenswrapper[4928]: I1122 00:30:30.922987 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2\") " pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:30 crc kubenswrapper[4928]: E1122 00:30:30.923147 4928 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Nov 22 00:30:30 crc kubenswrapper[4928]: E1122 00:30:30.923510 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-secret-default-alertmanager-proxy-tls podName:fcbba063-eb3a-45ff-bcfb-b50cfe532ee2 nodeName:}" failed. No retries permitted until 2025-11-22 00:30:32.923490899 +0000 UTC m=+1468.211522090 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "fcbba063-eb3a-45ff-bcfb-b50cfe532ee2") : secret "default-alertmanager-proxy-tls" not found Nov 22 00:30:32 crc kubenswrapper[4928]: I1122 00:30:32.949288 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2\") " pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:32 crc kubenswrapper[4928]: I1122 00:30:32.955778 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fcbba063-eb3a-45ff-bcfb-b50cfe532ee2-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2\") " pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:33 crc kubenswrapper[4928]: I1122 00:30:33.095040 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Nov 22 00:30:33 crc kubenswrapper[4928]: I1122 00:30:33.794145 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Nov 22 00:30:34 crc kubenswrapper[4928]: I1122 00:30:34.366430 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6856cfb745-8qmqr" event={"ID":"15489fd9-279f-4b12-9a61-54fc3a4ee40b","Type":"ContainerStarted","Data":"5411322bc328329be09abdf4b8b046d2c5946344264f4980f92101efbc4c4954"} Nov 22 00:30:34 crc kubenswrapper[4928]: I1122 00:30:34.383666 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6856cfb745-8qmqr" podStartSLOduration=2.951835702 podStartE2EDuration="9.383621311s" podCreationTimestamp="2025-11-22 00:30:25 +0000 UTC" firstStartedPulling="2025-11-22 00:30:26.162578685 +0000 UTC m=+1461.450609876" lastFinishedPulling="2025-11-22 00:30:32.594364294 +0000 UTC m=+1467.882395485" observedRunningTime="2025-11-22 00:30:34.378901494 +0000 UTC m=+1469.666932705" watchObservedRunningTime="2025-11-22 00:30:34.383621311 +0000 UTC m=+1469.671652502" Nov 22 00:30:35 crc kubenswrapper[4928]: W1122 00:30:35.627101 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcbba063_eb3a_45ff_bcfb_b50cfe532ee2.slice/crio-4b11e5f596f423c3e2f4a2091f25c0c26f23ccb050539409e99059c7d20b0cb0 WatchSource:0}: Error finding container 4b11e5f596f423c3e2f4a2091f25c0c26f23ccb050539409e99059c7d20b0cb0: Status 404 returned error can't find the container with id 4b11e5f596f423c3e2f4a2091f25c0c26f23ccb050539409e99059c7d20b0cb0 Nov 22 00:30:36 crc kubenswrapper[4928]: I1122 00:30:36.378329 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" 
event={"ID":"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad","Type":"ContainerStarted","Data":"387715eddf3148520a8c2d89d53eae20ee19ac1a2df73d74ac5a82a590ff1b62"} Nov 22 00:30:36 crc kubenswrapper[4928]: I1122 00:30:36.379495 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2","Type":"ContainerStarted","Data":"4b11e5f596f423c3e2f4a2091f25c0c26f23ccb050539409e99059c7d20b0cb0"} Nov 22 00:30:38 crc kubenswrapper[4928]: I1122 00:30:38.394174 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad","Type":"ContainerStarted","Data":"c060519e1bb5c336d2ff71c6562346aab04f8ed50142546dc53ae4dcbd662649"} Nov 22 00:30:38 crc kubenswrapper[4928]: I1122 00:30:38.396584 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2","Type":"ContainerStarted","Data":"08d1f4b2d8e1ac512681d262ffd02feb7825a53c85cc2256c4f73ab1b392707c"} Nov 22 00:30:42 crc kubenswrapper[4928]: I1122 00:30:42.162713 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx"] Nov 22 00:30:42 crc kubenswrapper[4928]: I1122 00:30:42.164155 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx" Nov 22 00:30:42 crc kubenswrapper[4928]: I1122 00:30:42.166013 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-coll-meter-proxy-tls" Nov 22 00:30:42 crc kubenswrapper[4928]: I1122 00:30:42.166113 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-dockercfg-5t5dl" Nov 22 00:30:42 crc kubenswrapper[4928]: I1122 00:30:42.167717 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-session-secret" Nov 22 00:30:42 crc kubenswrapper[4928]: I1122 00:30:42.169754 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx"] Nov 22 00:30:42 crc kubenswrapper[4928]: I1122 00:30:42.174658 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-meter-sg-core-configmap" Nov 22 00:30:42 crc kubenswrapper[4928]: I1122 00:30:42.253509 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/733293ab-7999-45c4-a8fe-9459eed18b33-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx\" (UID: \"733293ab-7999-45c4-a8fe-9459eed18b33\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx" Nov 22 00:30:42 crc kubenswrapper[4928]: I1122 00:30:42.253603 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/733293ab-7999-45c4-a8fe-9459eed18b33-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx\" (UID: \"733293ab-7999-45c4-a8fe-9459eed18b33\") " 
pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx" Nov 22 00:30:42 crc kubenswrapper[4928]: I1122 00:30:42.253675 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/733293ab-7999-45c4-a8fe-9459eed18b33-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx\" (UID: \"733293ab-7999-45c4-a8fe-9459eed18b33\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx" Nov 22 00:30:42 crc kubenswrapper[4928]: I1122 00:30:42.253736 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/733293ab-7999-45c4-a8fe-9459eed18b33-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx\" (UID: \"733293ab-7999-45c4-a8fe-9459eed18b33\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx" Nov 22 00:30:42 crc kubenswrapper[4928]: I1122 00:30:42.253761 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62klh\" (UniqueName: \"kubernetes.io/projected/733293ab-7999-45c4-a8fe-9459eed18b33-kube-api-access-62klh\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx\" (UID: \"733293ab-7999-45c4-a8fe-9459eed18b33\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx" Nov 22 00:30:42 crc kubenswrapper[4928]: I1122 00:30:42.354672 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/733293ab-7999-45c4-a8fe-9459eed18b33-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx\" (UID: \"733293ab-7999-45c4-a8fe-9459eed18b33\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx" Nov 22 
00:30:42 crc kubenswrapper[4928]: I1122 00:30:42.355253 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/733293ab-7999-45c4-a8fe-9459eed18b33-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx\" (UID: \"733293ab-7999-45c4-a8fe-9459eed18b33\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx" Nov 22 00:30:42 crc kubenswrapper[4928]: E1122 00:30:42.354889 4928 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Nov 22 00:30:42 crc kubenswrapper[4928]: E1122 00:30:42.355460 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/733293ab-7999-45c4-a8fe-9459eed18b33-default-cloud1-coll-meter-proxy-tls podName:733293ab-7999-45c4-a8fe-9459eed18b33 nodeName:}" failed. No retries permitted until 2025-11-22 00:30:42.855437114 +0000 UTC m=+1478.143468305 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/733293ab-7999-45c4-a8fe-9459eed18b33-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx" (UID: "733293ab-7999-45c4-a8fe-9459eed18b33") : secret "default-cloud1-coll-meter-proxy-tls" not found Nov 22 00:30:42 crc kubenswrapper[4928]: I1122 00:30:42.355579 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/733293ab-7999-45c4-a8fe-9459eed18b33-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx\" (UID: \"733293ab-7999-45c4-a8fe-9459eed18b33\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx" Nov 22 00:30:42 crc kubenswrapper[4928]: I1122 00:30:42.355696 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/733293ab-7999-45c4-a8fe-9459eed18b33-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx\" (UID: \"733293ab-7999-45c4-a8fe-9459eed18b33\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx" Nov 22 00:30:42 crc kubenswrapper[4928]: I1122 00:30:42.355776 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62klh\" (UniqueName: \"kubernetes.io/projected/733293ab-7999-45c4-a8fe-9459eed18b33-kube-api-access-62klh\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx\" (UID: \"733293ab-7999-45c4-a8fe-9459eed18b33\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx" Nov 22 00:30:42 crc kubenswrapper[4928]: I1122 00:30:42.355641 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/733293ab-7999-45c4-a8fe-9459eed18b33-socket-dir\") pod 
\"default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx\" (UID: \"733293ab-7999-45c4-a8fe-9459eed18b33\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx" Nov 22 00:30:42 crc kubenswrapper[4928]: I1122 00:30:42.357311 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/733293ab-7999-45c4-a8fe-9459eed18b33-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx\" (UID: \"733293ab-7999-45c4-a8fe-9459eed18b33\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx" Nov 22 00:30:42 crc kubenswrapper[4928]: I1122 00:30:42.362032 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/733293ab-7999-45c4-a8fe-9459eed18b33-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx\" (UID: \"733293ab-7999-45c4-a8fe-9459eed18b33\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx" Nov 22 00:30:42 crc kubenswrapper[4928]: I1122 00:30:42.374009 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62klh\" (UniqueName: \"kubernetes.io/projected/733293ab-7999-45c4-a8fe-9459eed18b33-kube-api-access-62klh\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx\" (UID: \"733293ab-7999-45c4-a8fe-9459eed18b33\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx" Nov 22 00:30:42 crc kubenswrapper[4928]: I1122 00:30:42.870352 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/733293ab-7999-45c4-a8fe-9459eed18b33-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx\" (UID: \"733293ab-7999-45c4-a8fe-9459eed18b33\") " 
pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx" Nov 22 00:30:42 crc kubenswrapper[4928]: E1122 00:30:42.870641 4928 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Nov 22 00:30:42 crc kubenswrapper[4928]: E1122 00:30:42.870751 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/733293ab-7999-45c4-a8fe-9459eed18b33-default-cloud1-coll-meter-proxy-tls podName:733293ab-7999-45c4-a8fe-9459eed18b33 nodeName:}" failed. No retries permitted until 2025-11-22 00:30:43.870724768 +0000 UTC m=+1479.158755969 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/733293ab-7999-45c4-a8fe-9459eed18b33-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx" (UID: "733293ab-7999-45c4-a8fe-9459eed18b33") : secret "default-cloud1-coll-meter-proxy-tls" not found Nov 22 00:30:43 crc kubenswrapper[4928]: I1122 00:30:43.887516 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/733293ab-7999-45c4-a8fe-9459eed18b33-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx\" (UID: \"733293ab-7999-45c4-a8fe-9459eed18b33\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx" Nov 22 00:30:43 crc kubenswrapper[4928]: I1122 00:30:43.893141 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/733293ab-7999-45c4-a8fe-9459eed18b33-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx\" (UID: \"733293ab-7999-45c4-a8fe-9459eed18b33\") " 
pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx" Nov 22 00:30:43 crc kubenswrapper[4928]: I1122 00:30:43.979965 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx" Nov 22 00:30:44 crc kubenswrapper[4928]: I1122 00:30:44.437082 4928 generic.go:334] "Generic (PLEG): container finished" podID="fcbba063-eb3a-45ff-bcfb-b50cfe532ee2" containerID="08d1f4b2d8e1ac512681d262ffd02feb7825a53c85cc2256c4f73ab1b392707c" exitCode=0 Nov 22 00:30:44 crc kubenswrapper[4928]: I1122 00:30:44.437463 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2","Type":"ContainerDied","Data":"08d1f4b2d8e1ac512681d262ffd02feb7825a53c85cc2256c4f73ab1b392707c"} Nov 22 00:30:44 crc kubenswrapper[4928]: I1122 00:30:44.446546 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn"] Nov 22 00:30:44 crc kubenswrapper[4928]: I1122 00:30:44.451041 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad","Type":"ContainerStarted","Data":"24027d474539bf46a716b5d1c2a8a938cdc88d01a116902b1106b734cc321ce5"} Nov 22 00:30:44 crc kubenswrapper[4928]: I1122 00:30:44.451287 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn" Nov 22 00:30:44 crc kubenswrapper[4928]: I1122 00:30:44.454059 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-ceil-meter-proxy-tls" Nov 22 00:30:44 crc kubenswrapper[4928]: I1122 00:30:44.454287 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-meter-sg-core-configmap" Nov 22 00:30:44 crc kubenswrapper[4928]: I1122 00:30:44.465786 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn"] Nov 22 00:30:44 crc kubenswrapper[4928]: I1122 00:30:44.512420 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=4.62438568 podStartE2EDuration="30.512398481s" podCreationTimestamp="2025-11-22 00:30:14 +0000 UTC" firstStartedPulling="2025-11-22 00:30:18.403568297 +0000 UTC m=+1453.691599488" lastFinishedPulling="2025-11-22 00:30:44.291581098 +0000 UTC m=+1479.579612289" observedRunningTime="2025-11-22 00:30:44.507827156 +0000 UTC m=+1479.795858347" watchObservedRunningTime="2025-11-22 00:30:44.512398481 +0000 UTC m=+1479.800429672" Nov 22 00:30:44 crc kubenswrapper[4928]: I1122 00:30:44.595605 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn\" (UID: \"c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn" Nov 22 00:30:44 crc kubenswrapper[4928]: I1122 00:30:44.596472 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn\" (UID: \"c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn" Nov 22 00:30:44 crc kubenswrapper[4928]: I1122 00:30:44.596762 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn\" (UID: \"c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn" Nov 22 00:30:44 crc kubenswrapper[4928]: I1122 00:30:44.596909 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn\" (UID: \"c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn" Nov 22 00:30:44 crc kubenswrapper[4928]: I1122 00:30:44.596984 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhvt5\" (UniqueName: \"kubernetes.io/projected/c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714-kube-api-access-nhvt5\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn\" (UID: \"c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn" Nov 22 00:30:44 crc kubenswrapper[4928]: I1122 00:30:44.616951 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx"] Nov 22 00:30:44 crc kubenswrapper[4928]: W1122 00:30:44.622237 4928 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod733293ab_7999_45c4_a8fe_9459eed18b33.slice/crio-3ffb4dd92e969cee86012f68039a2c5617a9835320af76a9aece77fedec63df0 WatchSource:0}: Error finding container 3ffb4dd92e969cee86012f68039a2c5617a9835320af76a9aece77fedec63df0: Status 404 returned error can't find the container with id 3ffb4dd92e969cee86012f68039a2c5617a9835320af76a9aece77fedec63df0 Nov 22 00:30:44 crc kubenswrapper[4928]: I1122 00:30:44.698171 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn\" (UID: \"c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn" Nov 22 00:30:44 crc kubenswrapper[4928]: I1122 00:30:44.698204 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhvt5\" (UniqueName: \"kubernetes.io/projected/c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714-kube-api-access-nhvt5\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn\" (UID: \"c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn" Nov 22 00:30:44 crc kubenswrapper[4928]: I1122 00:30:44.698264 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn\" (UID: \"c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn" Nov 22 00:30:44 crc kubenswrapper[4928]: I1122 00:30:44.698336 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" 
(UniqueName: \"kubernetes.io/empty-dir/c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn\" (UID: \"c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn" Nov 22 00:30:44 crc kubenswrapper[4928]: I1122 00:30:44.698356 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn\" (UID: \"c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn" Nov 22 00:30:44 crc kubenswrapper[4928]: E1122 00:30:44.698436 4928 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Nov 22 00:30:44 crc kubenswrapper[4928]: E1122 00:30:44.698476 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714-default-cloud1-ceil-meter-proxy-tls podName:c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714 nodeName:}" failed. No retries permitted until 2025-11-22 00:30:45.19846255 +0000 UTC m=+1480.486493741 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn" (UID: "c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714") : secret "default-cloud1-ceil-meter-proxy-tls" not found Nov 22 00:30:44 crc kubenswrapper[4928]: I1122 00:30:44.698857 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn\" (UID: \"c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn" Nov 22 00:30:44 crc kubenswrapper[4928]: I1122 00:30:44.699509 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn\" (UID: \"c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn" Nov 22 00:30:44 crc kubenswrapper[4928]: I1122 00:30:44.705064 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn\" (UID: \"c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn" Nov 22 00:30:44 crc kubenswrapper[4928]: I1122 00:30:44.717180 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhvt5\" (UniqueName: \"kubernetes.io/projected/c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714-kube-api-access-nhvt5\") pod 
\"default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn\" (UID: \"c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn" Nov 22 00:30:45 crc kubenswrapper[4928]: I1122 00:30:45.205634 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn\" (UID: \"c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn" Nov 22 00:30:45 crc kubenswrapper[4928]: E1122 00:30:45.205831 4928 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Nov 22 00:30:45 crc kubenswrapper[4928]: E1122 00:30:45.205894 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714-default-cloud1-ceil-meter-proxy-tls podName:c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714 nodeName:}" failed. No retries permitted until 2025-11-22 00:30:46.20587693 +0000 UTC m=+1481.493908121 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn" (UID: "c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714") : secret "default-cloud1-ceil-meter-proxy-tls" not found Nov 22 00:30:45 crc kubenswrapper[4928]: I1122 00:30:45.461539 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx" event={"ID":"733293ab-7999-45c4-a8fe-9459eed18b33","Type":"ContainerStarted","Data":"3a363d29d49d5bed91a2c5b672d28f66b94770396cd4b76b83c444621a75ab41"} Nov 22 00:30:45 crc kubenswrapper[4928]: I1122 00:30:45.461898 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx" event={"ID":"733293ab-7999-45c4-a8fe-9459eed18b33","Type":"ContainerStarted","Data":"3ffb4dd92e969cee86012f68039a2c5617a9835320af76a9aece77fedec63df0"} Nov 22 00:30:46 crc kubenswrapper[4928]: I1122 00:30:46.222489 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn\" (UID: \"c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn" Nov 22 00:30:46 crc kubenswrapper[4928]: I1122 00:30:46.228252 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn\" (UID: \"c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714\") " 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn" Nov 22 00:30:46 crc kubenswrapper[4928]: I1122 00:30:46.305900 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn" Nov 22 00:30:46 crc kubenswrapper[4928]: I1122 00:30:46.779557 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn"] Nov 22 00:30:46 crc kubenswrapper[4928]: W1122 00:30:46.795703 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8f6cfc6_a08e_4dfe_b7d9_8b9b0c2a4714.slice/crio-08d38377f567db5796184441b4a6c13ea05d2e77800faff9d6bb247a796ee27d WatchSource:0}: Error finding container 08d38377f567db5796184441b4a6c13ea05d2e77800faff9d6bb247a796ee27d: Status 404 returned error can't find the container with id 08d38377f567db5796184441b4a6c13ea05d2e77800faff9d6bb247a796ee27d Nov 22 00:30:47 crc kubenswrapper[4928]: I1122 00:30:47.482899 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2","Type":"ContainerStarted","Data":"c303a7dd829091a217861582e4493e3532dcb2342d40211bbf603df255ac615e"} Nov 22 00:30:47 crc kubenswrapper[4928]: I1122 00:30:47.484918 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn" event={"ID":"c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714","Type":"ContainerStarted","Data":"63544c0403bcaa1047f5579491d13aea8a5c38c440e3882c18b82ddacd2768f5"} Nov 22 00:30:47 crc kubenswrapper[4928]: I1122 00:30:47.484944 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn" 
event={"ID":"c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714","Type":"ContainerStarted","Data":"08d38377f567db5796184441b4a6c13ea05d2e77800faff9d6bb247a796ee27d"} Nov 22 00:30:47 crc kubenswrapper[4928]: I1122 00:30:47.974926 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/prometheus-default-0" Nov 22 00:30:47 crc kubenswrapper[4928]: I1122 00:30:47.975106 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Nov 22 00:30:48 crc kubenswrapper[4928]: I1122 00:30:48.104479 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Nov 22 00:30:48 crc kubenswrapper[4928]: I1122 00:30:48.318641 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb"] Nov 22 00:30:48 crc kubenswrapper[4928]: I1122 00:30:48.319948 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" Nov 22 00:30:48 crc kubenswrapper[4928]: I1122 00:30:48.322132 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-sens-meter-sg-core-configmap" Nov 22 00:30:48 crc kubenswrapper[4928]: I1122 00:30:48.322426 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-sens-meter-proxy-tls" Nov 22 00:30:48 crc kubenswrapper[4928]: I1122 00:30:48.330527 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb"] Nov 22 00:30:48 crc kubenswrapper[4928]: I1122 00:30:48.370073 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb\" (UID: \"d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" Nov 22 00:30:48 crc kubenswrapper[4928]: I1122 00:30:48.370281 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb\" (UID: \"d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" Nov 22 00:30:48 crc kubenswrapper[4928]: I1122 00:30:48.370374 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb\" 
(UID: \"d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" Nov 22 00:30:48 crc kubenswrapper[4928]: I1122 00:30:48.370461 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l27nx\" (UniqueName: \"kubernetes.io/projected/d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4-kube-api-access-l27nx\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb\" (UID: \"d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" Nov 22 00:30:48 crc kubenswrapper[4928]: I1122 00:30:48.370546 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb\" (UID: \"d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" Nov 22 00:30:48 crc kubenswrapper[4928]: I1122 00:30:48.471536 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb\" (UID: \"d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" Nov 22 00:30:48 crc kubenswrapper[4928]: I1122 00:30:48.471595 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb\" (UID: \"d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4\") " 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" Nov 22 00:30:48 crc kubenswrapper[4928]: I1122 00:30:48.471622 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l27nx\" (UniqueName: \"kubernetes.io/projected/d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4-kube-api-access-l27nx\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb\" (UID: \"d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" Nov 22 00:30:48 crc kubenswrapper[4928]: I1122 00:30:48.471645 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb\" (UID: \"d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" Nov 22 00:30:48 crc kubenswrapper[4928]: I1122 00:30:48.471744 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb\" (UID: \"d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" Nov 22 00:30:48 crc kubenswrapper[4928]: E1122 00:30:48.471785 4928 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Nov 22 00:30:48 crc kubenswrapper[4928]: E1122 00:30:48.471862 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4-default-cloud1-sens-meter-proxy-tls podName:d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4 nodeName:}" failed. 
No retries permitted until 2025-11-22 00:30:48.971843704 +0000 UTC m=+1484.259874895 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" (UID: "d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4") : secret "default-cloud1-sens-meter-proxy-tls" not found Nov 22 00:30:48 crc kubenswrapper[4928]: I1122 00:30:48.472173 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb\" (UID: \"d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" Nov 22 00:30:48 crc kubenswrapper[4928]: I1122 00:30:48.472473 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb\" (UID: \"d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" Nov 22 00:30:48 crc kubenswrapper[4928]: I1122 00:30:48.478332 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb\" (UID: \"d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" Nov 22 00:30:48 crc kubenswrapper[4928]: I1122 00:30:48.487280 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l27nx\" (UniqueName: 
\"kubernetes.io/projected/d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4-kube-api-access-l27nx\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb\" (UID: \"d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" Nov 22 00:30:48 crc kubenswrapper[4928]: I1122 00:30:48.495521 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2","Type":"ContainerStarted","Data":"98b319b6d1bf8f3abd41eb281ede08243d2c206f48748bb15e37943d0627bac4"} Nov 22 00:30:48 crc kubenswrapper[4928]: I1122 00:30:48.533242 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Nov 22 00:30:48 crc kubenswrapper[4928]: I1122 00:30:48.980044 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb\" (UID: \"d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" Nov 22 00:30:48 crc kubenswrapper[4928]: E1122 00:30:48.980218 4928 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Nov 22 00:30:48 crc kubenswrapper[4928]: E1122 00:30:48.980476 4928 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4-default-cloud1-sens-meter-proxy-tls podName:d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4 nodeName:}" failed. No retries permitted until 2025-11-22 00:30:49.980452947 +0000 UTC m=+1485.268484138 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" (UID: "d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4") : secret "default-cloud1-sens-meter-proxy-tls" not found Nov 22 00:30:50 crc kubenswrapper[4928]: I1122 00:30:50.006098 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb\" (UID: \"d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" Nov 22 00:30:50 crc kubenswrapper[4928]: I1122 00:30:50.015315 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb\" (UID: \"d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" Nov 22 00:30:50 crc kubenswrapper[4928]: I1122 00:30:50.188876 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" Nov 22 00:30:51 crc kubenswrapper[4928]: I1122 00:30:51.786085 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb"] Nov 22 00:30:51 crc kubenswrapper[4928]: W1122 00:30:51.796474 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9d56bc9_a4c5_4dea_a26f_e5f4df6667f4.slice/crio-0de5d0a15ffa2d89a98b20bff1f4586e598f608922abbcbb6f2e06319cba35a6 WatchSource:0}: Error finding container 0de5d0a15ffa2d89a98b20bff1f4586e598f608922abbcbb6f2e06319cba35a6: Status 404 returned error can't find the container with id 0de5d0a15ffa2d89a98b20bff1f4586e598f608922abbcbb6f2e06319cba35a6 Nov 22 00:30:52 crc kubenswrapper[4928]: I1122 00:30:52.533572 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" event={"ID":"d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4","Type":"ContainerStarted","Data":"0de5d0a15ffa2d89a98b20bff1f4586e598f608922abbcbb6f2e06319cba35a6"} Nov 22 00:30:52 crc kubenswrapper[4928]: I1122 00:30:52.539146 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"fcbba063-eb3a-45ff-bcfb-b50cfe532ee2","Type":"ContainerStarted","Data":"7ba0d9d6f99bb99f370a8611a4fd7a130857f124c3c0e3db2295a80b4b088397"} Nov 22 00:30:52 crc kubenswrapper[4928]: I1122 00:30:52.541089 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx" event={"ID":"733293ab-7999-45c4-a8fe-9459eed18b33","Type":"ContainerStarted","Data":"6999cdcce27d0445d445bdfce6a4a39bbae3d5169c1c5edaae66fd2b0b3211cb"} Nov 22 00:30:52 crc kubenswrapper[4928]: I1122 00:30:52.544167 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn" event={"ID":"c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714","Type":"ContainerStarted","Data":"deb64d81f8fc0aa28fb2e2d36c437532529811ade8618402f5a3d14b39f6a8c7"} Nov 22 00:30:52 crc kubenswrapper[4928]: I1122 00:30:52.561899 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=17.444842885 podStartE2EDuration="24.561881671s" podCreationTimestamp="2025-11-22 00:30:28 +0000 UTC" firstStartedPulling="2025-11-22 00:30:44.439316187 +0000 UTC m=+1479.727347388" lastFinishedPulling="2025-11-22 00:30:51.556354983 +0000 UTC m=+1486.844386174" observedRunningTime="2025-11-22 00:30:52.559496876 +0000 UTC m=+1487.847528067" watchObservedRunningTime="2025-11-22 00:30:52.561881671 +0000 UTC m=+1487.849912862" Nov 22 00:30:55 crc kubenswrapper[4928]: I1122 00:30:55.698543 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6"] Nov 22 00:30:55 crc kubenswrapper[4928]: I1122 00:30:55.700282 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6" Nov 22 00:30:55 crc kubenswrapper[4928]: I1122 00:30:55.702763 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-cert" Nov 22 00:30:55 crc kubenswrapper[4928]: I1122 00:30:55.708762 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-event-sg-core-configmap" Nov 22 00:30:55 crc kubenswrapper[4928]: I1122 00:30:55.712851 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6"] Nov 22 00:30:55 crc kubenswrapper[4928]: I1122 00:30:55.792521 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/d3ef7e4d-312d-4763-95e9-6245ae074e73-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6\" (UID: \"d3ef7e4d-312d-4763-95e9-6245ae074e73\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6" Nov 22 00:30:55 crc kubenswrapper[4928]: I1122 00:30:55.792593 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6666\" (UniqueName: \"kubernetes.io/projected/d3ef7e4d-312d-4763-95e9-6245ae074e73-kube-api-access-w6666\") pod \"default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6\" (UID: \"d3ef7e4d-312d-4763-95e9-6245ae074e73\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6" Nov 22 00:30:55 crc kubenswrapper[4928]: I1122 00:30:55.792806 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/d3ef7e4d-312d-4763-95e9-6245ae074e73-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6\" (UID: \"d3ef7e4d-312d-4763-95e9-6245ae074e73\") 
" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6" Nov 22 00:30:55 crc kubenswrapper[4928]: I1122 00:30:55.792999 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d3ef7e4d-312d-4763-95e9-6245ae074e73-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6\" (UID: \"d3ef7e4d-312d-4763-95e9-6245ae074e73\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6" Nov 22 00:30:55 crc kubenswrapper[4928]: I1122 00:30:55.893927 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/d3ef7e4d-312d-4763-95e9-6245ae074e73-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6\" (UID: \"d3ef7e4d-312d-4763-95e9-6245ae074e73\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6" Nov 22 00:30:55 crc kubenswrapper[4928]: I1122 00:30:55.893980 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d3ef7e4d-312d-4763-95e9-6245ae074e73-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6\" (UID: \"d3ef7e4d-312d-4763-95e9-6245ae074e73\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6" Nov 22 00:30:55 crc kubenswrapper[4928]: I1122 00:30:55.894022 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/d3ef7e4d-312d-4763-95e9-6245ae074e73-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6\" (UID: \"d3ef7e4d-312d-4763-95e9-6245ae074e73\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6" Nov 22 00:30:55 crc kubenswrapper[4928]: I1122 00:30:55.894050 4928 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-w6666\" (UniqueName: \"kubernetes.io/projected/d3ef7e4d-312d-4763-95e9-6245ae074e73-kube-api-access-w6666\") pod \"default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6\" (UID: \"d3ef7e4d-312d-4763-95e9-6245ae074e73\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6" Nov 22 00:30:55 crc kubenswrapper[4928]: I1122 00:30:55.894714 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d3ef7e4d-312d-4763-95e9-6245ae074e73-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6\" (UID: \"d3ef7e4d-312d-4763-95e9-6245ae074e73\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6" Nov 22 00:30:55 crc kubenswrapper[4928]: I1122 00:30:55.895158 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/d3ef7e4d-312d-4763-95e9-6245ae074e73-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6\" (UID: \"d3ef7e4d-312d-4763-95e9-6245ae074e73\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6" Nov 22 00:30:55 crc kubenswrapper[4928]: I1122 00:30:55.903506 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/d3ef7e4d-312d-4763-95e9-6245ae074e73-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6\" (UID: \"d3ef7e4d-312d-4763-95e9-6245ae074e73\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6" Nov 22 00:30:55 crc kubenswrapper[4928]: I1122 00:30:55.921485 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6666\" (UniqueName: \"kubernetes.io/projected/d3ef7e4d-312d-4763-95e9-6245ae074e73-kube-api-access-w6666\") pod \"default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6\" (UID: 
\"d3ef7e4d-312d-4763-95e9-6245ae074e73\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6" Nov 22 00:30:56 crc kubenswrapper[4928]: I1122 00:30:56.015670 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6" Nov 22 00:30:56 crc kubenswrapper[4928]: I1122 00:30:56.730990 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z"] Nov 22 00:30:56 crc kubenswrapper[4928]: I1122 00:30:56.732025 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z" Nov 22 00:30:56 crc kubenswrapper[4928]: I1122 00:30:56.735616 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-event-sg-core-configmap" Nov 22 00:30:56 crc kubenswrapper[4928]: I1122 00:30:56.741507 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z"] Nov 22 00:30:56 crc kubenswrapper[4928]: I1122 00:30:56.805950 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z\" (UID: \"1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z" Nov 22 00:30:56 crc kubenswrapper[4928]: I1122 00:30:56.806004 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z\" (UID: \"1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d\") " 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z" Nov 22 00:30:56 crc kubenswrapper[4928]: I1122 00:30:56.806043 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z\" (UID: \"1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z" Nov 22 00:30:56 crc kubenswrapper[4928]: I1122 00:30:56.806072 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsl5l\" (UniqueName: \"kubernetes.io/projected/1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d-kube-api-access-rsl5l\") pod \"default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z\" (UID: \"1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z" Nov 22 00:30:56 crc kubenswrapper[4928]: I1122 00:30:56.907352 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z\" (UID: \"1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z" Nov 22 00:30:56 crc kubenswrapper[4928]: I1122 00:30:56.907651 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z\" (UID: \"1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z" Nov 22 00:30:56 crc kubenswrapper[4928]: I1122 00:30:56.907749 4928 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsl5l\" (UniqueName: \"kubernetes.io/projected/1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d-kube-api-access-rsl5l\") pod \"default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z\" (UID: \"1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z" Nov 22 00:30:56 crc kubenswrapper[4928]: I1122 00:30:56.907882 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z\" (UID: \"1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z" Nov 22 00:30:56 crc kubenswrapper[4928]: I1122 00:30:56.907968 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z\" (UID: \"1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z" Nov 22 00:30:56 crc kubenswrapper[4928]: I1122 00:30:56.908500 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z\" (UID: \"1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z" Nov 22 00:30:56 crc kubenswrapper[4928]: I1122 00:30:56.912864 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d-elastic-certs\") pod 
\"default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z\" (UID: \"1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z" Nov 22 00:30:56 crc kubenswrapper[4928]: I1122 00:30:56.928450 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsl5l\" (UniqueName: \"kubernetes.io/projected/1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d-kube-api-access-rsl5l\") pod \"default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z\" (UID: \"1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z" Nov 22 00:30:57 crc kubenswrapper[4928]: I1122 00:30:57.065160 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z" Nov 22 00:31:01 crc kubenswrapper[4928]: I1122 00:31:01.257377 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z"] Nov 22 00:31:01 crc kubenswrapper[4928]: W1122 00:31:01.312535 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3ef7e4d_312d_4763_95e9_6245ae074e73.slice/crio-4bda0fe468def054a26e45c8a85c0e39e7209af7c74872538a8bd73395a957c3 WatchSource:0}: Error finding container 4bda0fe468def054a26e45c8a85c0e39e7209af7c74872538a8bd73395a957c3: Status 404 returned error can't find the container with id 4bda0fe468def054a26e45c8a85c0e39e7209af7c74872538a8bd73395a957c3 Nov 22 00:31:01 crc kubenswrapper[4928]: I1122 00:31:01.313186 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6"] Nov 22 00:31:01 crc kubenswrapper[4928]: I1122 00:31:01.606582 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z" event={"ID":"1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d","Type":"ContainerStarted","Data":"a650478010771f0f318451c82922c362b48f4396ad8fce4d60b3782194aa5bf8"} Nov 22 00:31:01 crc kubenswrapper[4928]: I1122 00:31:01.607655 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6" event={"ID":"d3ef7e4d-312d-4763-95e9-6245ae074e73","Type":"ContainerStarted","Data":"4bda0fe468def054a26e45c8a85c0e39e7209af7c74872538a8bd73395a957c3"} Nov 22 00:31:01 crc kubenswrapper[4928]: E1122 00:31:01.886812 4928 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/openshift/origin-oauth-proxy:latest: can't talk to a V1 container registry" image="quay.io/openshift/origin-oauth-proxy:latest" Nov 22 00:31:01 crc kubenswrapper[4928]: E1122 00:31:01.886981 4928 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:oauth-proxy,Image:quay.io/openshift/origin-oauth-proxy:latest,Command:[],Args:[-https-address=:8083 -tls-cert=/etc/tls/private/tls.crt -tls-key=/etc/tls/private/tls.key -cookie-secret-file=/etc/proxy/secrets/session_secret -openshift-service-account=smart-gateway -upstream=http://localhost:8081/ -openshift-delegate-urls={\"/\": {\"namespace\": \"service-telemetry\", \"resource\": \"smartgateways\", \"group\": \"smartgateway.infra.watch\", \"verb\": 
\"get\"}}],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8083,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-cloud1-sens-meter-proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:session-secret,ReadOnly:false,MountPath:/etc/proxy/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l27nx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb_service-telemetry(d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4): ErrImagePull: initializing source docker://quay.io/openshift/origin-oauth-proxy:latest: can't talk to a V1 container registry" logger="UnhandledError" Nov 22 00:31:10 crc kubenswrapper[4928]: I1122 00:31:10.684843 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6" 
event={"ID":"d3ef7e4d-312d-4763-95e9-6245ae074e73","Type":"ContainerStarted","Data":"15c60643ef35366937e38148adceb0e1507768f53c18f5af75ff86e08b192630"} Nov 22 00:31:10 crc kubenswrapper[4928]: I1122 00:31:10.688149 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx" event={"ID":"733293ab-7999-45c4-a8fe-9459eed18b33","Type":"ContainerStarted","Data":"6181480b235f2369f84874417b7fcae539ba7e526cd900e080e1e815abcffd61"} Nov 22 00:31:10 crc kubenswrapper[4928]: I1122 00:31:10.691628 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn" event={"ID":"c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714","Type":"ContainerStarted","Data":"df148e5d89d4d04c7801aadbc34d476576e67c4a46a1e1e4179908bcf5fec0f8"} Nov 22 00:31:10 crc kubenswrapper[4928]: I1122 00:31:10.693105 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z" event={"ID":"1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d","Type":"ContainerStarted","Data":"11ad6c0c300aa67bcfc554d221cb5e0d95f5799bb28edfb68f1db55fb3354542"} Nov 22 00:31:10 crc kubenswrapper[4928]: I1122 00:31:10.694603 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" event={"ID":"d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4","Type":"ContainerStarted","Data":"56ccc3de53be2b180e0e21720e3051b4a670734e9a2a6229ed43eacc7bd18b14"} Nov 22 00:31:10 crc kubenswrapper[4928]: I1122 00:31:10.712959 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx" podStartSLOduration=2.9323013270000002 podStartE2EDuration="28.712942434s" podCreationTimestamp="2025-11-22 00:30:42 +0000 UTC" firstStartedPulling="2025-11-22 00:30:44.625428748 +0000 UTC m=+1479.913459939" 
lastFinishedPulling="2025-11-22 00:31:10.406069855 +0000 UTC m=+1505.694101046" observedRunningTime="2025-11-22 00:31:10.710075606 +0000 UTC m=+1505.998106797" watchObservedRunningTime="2025-11-22 00:31:10.712942434 +0000 UTC m=+1506.000973625" Nov 22 00:31:10 crc kubenswrapper[4928]: I1122 00:31:10.751609 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn" podStartSLOduration=3.206238237 podStartE2EDuration="26.751577992s" podCreationTimestamp="2025-11-22 00:30:44 +0000 UTC" firstStartedPulling="2025-11-22 00:30:46.802146651 +0000 UTC m=+1482.090177832" lastFinishedPulling="2025-11-22 00:31:10.347486346 +0000 UTC m=+1505.635517587" observedRunningTime="2025-11-22 00:31:10.736137663 +0000 UTC m=+1506.024168854" watchObservedRunningTime="2025-11-22 00:31:10.751577992 +0000 UTC m=+1506.039609183" Nov 22 00:31:10 crc kubenswrapper[4928]: E1122 00:31:10.922942 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-proxy\" with ErrImagePull: \"initializing source docker://quay.io/openshift/origin-oauth-proxy:latest: can't talk to a V1 container registry\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" podUID="d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4" Nov 22 00:31:11 crc kubenswrapper[4928]: I1122 00:31:11.701794 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" event={"ID":"d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4","Type":"ContainerStarted","Data":"1a21552e0cd6b22157f5b750c3c7215ac3b07f741557843c4a35414b4536b043"} Nov 22 00:31:11 crc kubenswrapper[4928]: E1122 00:31:11.703746 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift/origin-oauth-proxy:latest\\\"\"" 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" podUID="d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4" Nov 22 00:31:11 crc kubenswrapper[4928]: I1122 00:31:11.703943 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6" event={"ID":"d3ef7e4d-312d-4763-95e9-6245ae074e73","Type":"ContainerStarted","Data":"c71cf7929b2fac66ecb4a73e8ba7d005260816feda105bfd400fa635cd96d27a"} Nov 22 00:31:11 crc kubenswrapper[4928]: I1122 00:31:11.706394 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z" event={"ID":"1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d","Type":"ContainerStarted","Data":"eccfc069b67865b1d67c41c38c840920cae3e39743354aa98b1b03f1ce38ca68"} Nov 22 00:31:11 crc kubenswrapper[4928]: I1122 00:31:11.734205 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6" podStartSLOduration=7.259214583 podStartE2EDuration="16.734190719s" podCreationTimestamp="2025-11-22 00:30:55 +0000 UTC" firstStartedPulling="2025-11-22 00:31:01.316040296 +0000 UTC m=+1496.604071487" lastFinishedPulling="2025-11-22 00:31:10.791016432 +0000 UTC m=+1506.079047623" observedRunningTime="2025-11-22 00:31:11.731865575 +0000 UTC m=+1507.019896766" watchObservedRunningTime="2025-11-22 00:31:11.734190719 +0000 UTC m=+1507.022221910" Nov 22 00:31:11 crc kubenswrapper[4928]: I1122 00:31:11.751488 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z" podStartSLOduration=6.266494439 podStartE2EDuration="15.751464317s" podCreationTimestamp="2025-11-22 00:30:56 +0000 UTC" firstStartedPulling="2025-11-22 00:31:01.260297933 +0000 UTC m=+1496.548329114" lastFinishedPulling="2025-11-22 00:31:10.745267801 +0000 UTC m=+1506.033298992" 
observedRunningTime="2025-11-22 00:31:11.748289271 +0000 UTC m=+1507.036320472" watchObservedRunningTime="2025-11-22 00:31:11.751464317 +0000 UTC m=+1507.039495508" Nov 22 00:31:12 crc kubenswrapper[4928]: I1122 00:31:12.370710 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-zpx48"] Nov 22 00:31:12 crc kubenswrapper[4928]: I1122 00:31:12.371037 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/default-interconnect-68864d46cb-zpx48" podUID="ad4a716e-fa53-4438-9d94-eafc9b62af54" containerName="default-interconnect" containerID="cri-o://5faef04e7fb78e50db5a9ef1c813b8a6c49f002f7347a7b87cede03dd4cc534e" gracePeriod=30 Nov 22 00:31:12 crc kubenswrapper[4928]: I1122 00:31:12.717094 4928 generic.go:334] "Generic (PLEG): container finished" podID="ad4a716e-fa53-4438-9d94-eafc9b62af54" containerID="5faef04e7fb78e50db5a9ef1c813b8a6c49f002f7347a7b87cede03dd4cc534e" exitCode=0 Nov 22 00:31:12 crc kubenswrapper[4928]: I1122 00:31:12.717356 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-zpx48" event={"ID":"ad4a716e-fa53-4438-9d94-eafc9b62af54","Type":"ContainerDied","Data":"5faef04e7fb78e50db5a9ef1c813b8a6c49f002f7347a7b87cede03dd4cc534e"} Nov 22 00:31:12 crc kubenswrapper[4928]: I1122 00:31:12.736167 4928 generic.go:334] "Generic (PLEG): container finished" podID="d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4" containerID="56ccc3de53be2b180e0e21720e3051b4a670734e9a2a6229ed43eacc7bd18b14" exitCode=0 Nov 22 00:31:12 crc kubenswrapper[4928]: I1122 00:31:12.737013 4928 scope.go:117] "RemoveContainer" containerID="56ccc3de53be2b180e0e21720e3051b4a670734e9a2a6229ed43eacc7bd18b14" Nov 22 00:31:12 crc kubenswrapper[4928]: I1122 00:31:12.737038 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" 
event={"ID":"d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4","Type":"ContainerDied","Data":"56ccc3de53be2b180e0e21720e3051b4a670734e9a2a6229ed43eacc7bd18b14"} Nov 22 00:31:12 crc kubenswrapper[4928]: I1122 00:31:12.803921 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-zpx48" Nov 22 00:31:12 crc kubenswrapper[4928]: I1122 00:31:12.976773 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/ad4a716e-fa53-4438-9d94-eafc9b62af54-default-interconnect-openstack-ca\") pod \"ad4a716e-fa53-4438-9d94-eafc9b62af54\" (UID: \"ad4a716e-fa53-4438-9d94-eafc9b62af54\") " Nov 22 00:31:12 crc kubenswrapper[4928]: I1122 00:31:12.976849 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/ad4a716e-fa53-4438-9d94-eafc9b62af54-default-interconnect-openstack-credentials\") pod \"ad4a716e-fa53-4438-9d94-eafc9b62af54\" (UID: \"ad4a716e-fa53-4438-9d94-eafc9b62af54\") " Nov 22 00:31:12 crc kubenswrapper[4928]: I1122 00:31:12.976872 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmmqp\" (UniqueName: \"kubernetes.io/projected/ad4a716e-fa53-4438-9d94-eafc9b62af54-kube-api-access-bmmqp\") pod \"ad4a716e-fa53-4438-9d94-eafc9b62af54\" (UID: \"ad4a716e-fa53-4438-9d94-eafc9b62af54\") " Nov 22 00:31:12 crc kubenswrapper[4928]: I1122 00:31:12.976911 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/ad4a716e-fa53-4438-9d94-eafc9b62af54-default-interconnect-inter-router-credentials\") pod \"ad4a716e-fa53-4438-9d94-eafc9b62af54\" (UID: \"ad4a716e-fa53-4438-9d94-eafc9b62af54\") " Nov 22 00:31:12 crc kubenswrapper[4928]: I1122 00:31:12.976970 
4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/ad4a716e-fa53-4438-9d94-eafc9b62af54-sasl-config\") pod \"ad4a716e-fa53-4438-9d94-eafc9b62af54\" (UID: \"ad4a716e-fa53-4438-9d94-eafc9b62af54\") " Nov 22 00:31:12 crc kubenswrapper[4928]: I1122 00:31:12.977031 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/ad4a716e-fa53-4438-9d94-eafc9b62af54-default-interconnect-inter-router-ca\") pod \"ad4a716e-fa53-4438-9d94-eafc9b62af54\" (UID: \"ad4a716e-fa53-4438-9d94-eafc9b62af54\") " Nov 22 00:31:12 crc kubenswrapper[4928]: I1122 00:31:12.977054 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/ad4a716e-fa53-4438-9d94-eafc9b62af54-sasl-users\") pod \"ad4a716e-fa53-4438-9d94-eafc9b62af54\" (UID: \"ad4a716e-fa53-4438-9d94-eafc9b62af54\") " Nov 22 00:31:12 crc kubenswrapper[4928]: I1122 00:31:12.977977 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad4a716e-fa53-4438-9d94-eafc9b62af54-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "ad4a716e-fa53-4438-9d94-eafc9b62af54" (UID: "ad4a716e-fa53-4438-9d94-eafc9b62af54"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:31:12 crc kubenswrapper[4928]: I1122 00:31:12.981919 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad4a716e-fa53-4438-9d94-eafc9b62af54-kube-api-access-bmmqp" (OuterVolumeSpecName: "kube-api-access-bmmqp") pod "ad4a716e-fa53-4438-9d94-eafc9b62af54" (UID: "ad4a716e-fa53-4438-9d94-eafc9b62af54"). InnerVolumeSpecName "kube-api-access-bmmqp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:31:12 crc kubenswrapper[4928]: I1122 00:31:12.982554 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad4a716e-fa53-4438-9d94-eafc9b62af54-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "ad4a716e-fa53-4438-9d94-eafc9b62af54" (UID: "ad4a716e-fa53-4438-9d94-eafc9b62af54"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:31:12 crc kubenswrapper[4928]: I1122 00:31:12.984549 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad4a716e-fa53-4438-9d94-eafc9b62af54-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "ad4a716e-fa53-4438-9d94-eafc9b62af54" (UID: "ad4a716e-fa53-4438-9d94-eafc9b62af54"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:31:12 crc kubenswrapper[4928]: I1122 00:31:12.985961 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad4a716e-fa53-4438-9d94-eafc9b62af54-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "ad4a716e-fa53-4438-9d94-eafc9b62af54" (UID: "ad4a716e-fa53-4438-9d94-eafc9b62af54"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:31:12 crc kubenswrapper[4928]: I1122 00:31:12.990899 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad4a716e-fa53-4438-9d94-eafc9b62af54-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "ad4a716e-fa53-4438-9d94-eafc9b62af54" (UID: "ad4a716e-fa53-4438-9d94-eafc9b62af54"). 
InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.005954 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad4a716e-fa53-4438-9d94-eafc9b62af54-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "ad4a716e-fa53-4438-9d94-eafc9b62af54" (UID: "ad4a716e-fa53-4438-9d94-eafc9b62af54"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.079115 4928 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/ad4a716e-fa53-4438-9d94-eafc9b62af54-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.079160 4928 reconciler_common.go:293] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/ad4a716e-fa53-4438-9d94-eafc9b62af54-sasl-config\") on node \"crc\" DevicePath \"\"" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.079174 4928 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/ad4a716e-fa53-4438-9d94-eafc9b62af54-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.079186 4928 reconciler_common.go:293] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/ad4a716e-fa53-4438-9d94-eafc9b62af54-sasl-users\") on node \"crc\" DevicePath \"\"" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.079197 4928 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/ad4a716e-fa53-4438-9d94-eafc9b62af54-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Nov 22 
00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.079210 4928 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/ad4a716e-fa53-4438-9d94-eafc9b62af54-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.079221 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmmqp\" (UniqueName: \"kubernetes.io/projected/ad4a716e-fa53-4438-9d94-eafc9b62af54-kube-api-access-bmmqp\") on node \"crc\" DevicePath \"\"" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.543160 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-t79jj"] Nov 22 00:31:13 crc kubenswrapper[4928]: E1122 00:31:13.543430 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad4a716e-fa53-4438-9d94-eafc9b62af54" containerName="default-interconnect" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.543447 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad4a716e-fa53-4438-9d94-eafc9b62af54" containerName="default-interconnect" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.543555 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad4a716e-fa53-4438-9d94-eafc9b62af54" containerName="default-interconnect" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.543978 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-t79jj" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.554016 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-t79jj"] Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.687147 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/10247c42-bfd9-4ff3-967f-917f79dd5ac7-sasl-users\") pod \"default-interconnect-68864d46cb-t79jj\" (UID: \"10247c42-bfd9-4ff3-967f-917f79dd5ac7\") " pod="service-telemetry/default-interconnect-68864d46cb-t79jj" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.687233 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/10247c42-bfd9-4ff3-967f-917f79dd5ac7-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-t79jj\" (UID: \"10247c42-bfd9-4ff3-967f-917f79dd5ac7\") " pod="service-telemetry/default-interconnect-68864d46cb-t79jj" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.687295 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/10247c42-bfd9-4ff3-967f-917f79dd5ac7-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-t79jj\" (UID: \"10247c42-bfd9-4ff3-967f-917f79dd5ac7\") " pod="service-telemetry/default-interconnect-68864d46cb-t79jj" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.687354 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wllw5\" (UniqueName: \"kubernetes.io/projected/10247c42-bfd9-4ff3-967f-917f79dd5ac7-kube-api-access-wllw5\") pod \"default-interconnect-68864d46cb-t79jj\" 
(UID: \"10247c42-bfd9-4ff3-967f-917f79dd5ac7\") " pod="service-telemetry/default-interconnect-68864d46cb-t79jj" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.687456 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/10247c42-bfd9-4ff3-967f-917f79dd5ac7-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-t79jj\" (UID: \"10247c42-bfd9-4ff3-967f-917f79dd5ac7\") " pod="service-telemetry/default-interconnect-68864d46cb-t79jj" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.687480 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/10247c42-bfd9-4ff3-967f-917f79dd5ac7-sasl-config\") pod \"default-interconnect-68864d46cb-t79jj\" (UID: \"10247c42-bfd9-4ff3-967f-917f79dd5ac7\") " pod="service-telemetry/default-interconnect-68864d46cb-t79jj" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.687596 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/10247c42-bfd9-4ff3-967f-917f79dd5ac7-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-t79jj\" (UID: \"10247c42-bfd9-4ff3-967f-917f79dd5ac7\") " pod="service-telemetry/default-interconnect-68864d46cb-t79jj" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.745050 4928 generic.go:334] "Generic (PLEG): container finished" podID="1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d" containerID="11ad6c0c300aa67bcfc554d221cb5e0d95f5799bb28edfb68f1db55fb3354542" exitCode=0 Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.745127 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z" 
event={"ID":"1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d","Type":"ContainerDied","Data":"11ad6c0c300aa67bcfc554d221cb5e0d95f5799bb28edfb68f1db55fb3354542"} Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.745920 4928 scope.go:117] "RemoveContainer" containerID="11ad6c0c300aa67bcfc554d221cb5e0d95f5799bb28edfb68f1db55fb3354542" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.746610 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-zpx48" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.746628 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-zpx48" event={"ID":"ad4a716e-fa53-4438-9d94-eafc9b62af54","Type":"ContainerDied","Data":"516258354e0e7df9a23058c13132ed30b4c06308508562730a10a50c41275f62"} Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.746691 4928 scope.go:117] "RemoveContainer" containerID="5faef04e7fb78e50db5a9ef1c813b8a6c49f002f7347a7b87cede03dd4cc534e" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.751971 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" event={"ID":"d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4","Type":"ContainerStarted","Data":"16b1381f73518e43859a3363c73dad5d22ac419b55072d1af351da715bb20e7e"} Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.752029 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" event={"ID":"d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4","Type":"ContainerStarted","Data":"e9b183855e4153b59a47efa42b67269851cd13c320d85bb427dcc9722d4ffcd1"} Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.754185 4928 generic.go:334] "Generic (PLEG): container finished" podID="d3ef7e4d-312d-4763-95e9-6245ae074e73" containerID="15c60643ef35366937e38148adceb0e1507768f53c18f5af75ff86e08b192630" exitCode=0 
Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.754255 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6" event={"ID":"d3ef7e4d-312d-4763-95e9-6245ae074e73","Type":"ContainerDied","Data":"15c60643ef35366937e38148adceb0e1507768f53c18f5af75ff86e08b192630"} Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.754713 4928 scope.go:117] "RemoveContainer" containerID="15c60643ef35366937e38148adceb0e1507768f53c18f5af75ff86e08b192630" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.756246 4928 generic.go:334] "Generic (PLEG): container finished" podID="733293ab-7999-45c4-a8fe-9459eed18b33" containerID="6999cdcce27d0445d445bdfce6a4a39bbae3d5169c1c5edaae66fd2b0b3211cb" exitCode=0 Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.756288 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx" event={"ID":"733293ab-7999-45c4-a8fe-9459eed18b33","Type":"ContainerDied","Data":"6999cdcce27d0445d445bdfce6a4a39bbae3d5169c1c5edaae66fd2b0b3211cb"} Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.756862 4928 scope.go:117] "RemoveContainer" containerID="6999cdcce27d0445d445bdfce6a4a39bbae3d5169c1c5edaae66fd2b0b3211cb" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.766726 4928 generic.go:334] "Generic (PLEG): container finished" podID="c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714" containerID="deb64d81f8fc0aa28fb2e2d36c437532529811ade8618402f5a3d14b39f6a8c7" exitCode=0 Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.766979 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn" event={"ID":"c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714","Type":"ContainerDied","Data":"deb64d81f8fc0aa28fb2e2d36c437532529811ade8618402f5a3d14b39f6a8c7"} Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.768192 4928 scope.go:117] 
"RemoveContainer" containerID="deb64d81f8fc0aa28fb2e2d36c437532529811ade8618402f5a3d14b39f6a8c7" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.788866 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/10247c42-bfd9-4ff3-967f-917f79dd5ac7-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-t79jj\" (UID: \"10247c42-bfd9-4ff3-967f-917f79dd5ac7\") " pod="service-telemetry/default-interconnect-68864d46cb-t79jj" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.788954 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/10247c42-bfd9-4ff3-967f-917f79dd5ac7-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-t79jj\" (UID: \"10247c42-bfd9-4ff3-967f-917f79dd5ac7\") " pod="service-telemetry/default-interconnect-68864d46cb-t79jj" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.788996 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wllw5\" (UniqueName: \"kubernetes.io/projected/10247c42-bfd9-4ff3-967f-917f79dd5ac7-kube-api-access-wllw5\") pod \"default-interconnect-68864d46cb-t79jj\" (UID: \"10247c42-bfd9-4ff3-967f-917f79dd5ac7\") " pod="service-telemetry/default-interconnect-68864d46cb-t79jj" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.789112 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/10247c42-bfd9-4ff3-967f-917f79dd5ac7-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-t79jj\" (UID: \"10247c42-bfd9-4ff3-967f-917f79dd5ac7\") " pod="service-telemetry/default-interconnect-68864d46cb-t79jj" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.789139 4928 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/10247c42-bfd9-4ff3-967f-917f79dd5ac7-sasl-config\") pod \"default-interconnect-68864d46cb-t79jj\" (UID: \"10247c42-bfd9-4ff3-967f-917f79dd5ac7\") " pod="service-telemetry/default-interconnect-68864d46cb-t79jj" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.789171 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/10247c42-bfd9-4ff3-967f-917f79dd5ac7-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-t79jj\" (UID: \"10247c42-bfd9-4ff3-967f-917f79dd5ac7\") " pod="service-telemetry/default-interconnect-68864d46cb-t79jj" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.789198 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/10247c42-bfd9-4ff3-967f-917f79dd5ac7-sasl-users\") pod \"default-interconnect-68864d46cb-t79jj\" (UID: \"10247c42-bfd9-4ff3-967f-917f79dd5ac7\") " pod="service-telemetry/default-interconnect-68864d46cb-t79jj" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.791933 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/10247c42-bfd9-4ff3-967f-917f79dd5ac7-sasl-config\") pod \"default-interconnect-68864d46cb-t79jj\" (UID: \"10247c42-bfd9-4ff3-967f-917f79dd5ac7\") " pod="service-telemetry/default-interconnect-68864d46cb-t79jj" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.793030 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" podStartSLOduration=4.0717305679999996 podStartE2EDuration="25.793008881s" podCreationTimestamp="2025-11-22 00:30:48 +0000 UTC" firstStartedPulling="2025-11-22 00:30:51.800058946 +0000 UTC 
m=+1487.088090137" lastFinishedPulling="2025-11-22 00:31:13.521337259 +0000 UTC m=+1508.809368450" observedRunningTime="2025-11-22 00:31:13.782691321 +0000 UTC m=+1509.070722512" watchObservedRunningTime="2025-11-22 00:31:13.793008881 +0000 UTC m=+1509.081040102" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.798190 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/10247c42-bfd9-4ff3-967f-917f79dd5ac7-sasl-users\") pod \"default-interconnect-68864d46cb-t79jj\" (UID: \"10247c42-bfd9-4ff3-967f-917f79dd5ac7\") " pod="service-telemetry/default-interconnect-68864d46cb-t79jj" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.800115 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/10247c42-bfd9-4ff3-967f-917f79dd5ac7-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-t79jj\" (UID: \"10247c42-bfd9-4ff3-967f-917f79dd5ac7\") " pod="service-telemetry/default-interconnect-68864d46cb-t79jj" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.808920 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/10247c42-bfd9-4ff3-967f-917f79dd5ac7-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-t79jj\" (UID: \"10247c42-bfd9-4ff3-967f-917f79dd5ac7\") " pod="service-telemetry/default-interconnect-68864d46cb-t79jj" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.854288 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/10247c42-bfd9-4ff3-967f-917f79dd5ac7-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-t79jj\" (UID: \"10247c42-bfd9-4ff3-967f-917f79dd5ac7\") " 
pod="service-telemetry/default-interconnect-68864d46cb-t79jj" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.855190 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/10247c42-bfd9-4ff3-967f-917f79dd5ac7-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-t79jj\" (UID: \"10247c42-bfd9-4ff3-967f-917f79dd5ac7\") " pod="service-telemetry/default-interconnect-68864d46cb-t79jj" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.884279 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wllw5\" (UniqueName: \"kubernetes.io/projected/10247c42-bfd9-4ff3-967f-917f79dd5ac7-kube-api-access-wllw5\") pod \"default-interconnect-68864d46cb-t79jj\" (UID: \"10247c42-bfd9-4ff3-967f-917f79dd5ac7\") " pod="service-telemetry/default-interconnect-68864d46cb-t79jj" Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.895297 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-zpx48"] Nov 22 00:31:13 crc kubenswrapper[4928]: I1122 00:31:13.923638 4928 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-zpx48"] Nov 22 00:31:14 crc kubenswrapper[4928]: I1122 00:31:14.170037 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-t79jj" Nov 22 00:31:14 crc kubenswrapper[4928]: I1122 00:31:14.627187 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-t79jj"] Nov 22 00:31:14 crc kubenswrapper[4928]: I1122 00:31:14.790516 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6" event={"ID":"d3ef7e4d-312d-4763-95e9-6245ae074e73","Type":"ContainerStarted","Data":"014507d2fe5a8f9e2033386bdab21c341f820cdad435c8fc1ce7478cba80749b"} Nov 22 00:31:14 crc kubenswrapper[4928]: I1122 00:31:14.792943 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z" event={"ID":"1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d","Type":"ContainerStarted","Data":"a18d24b4952a9d6128ea0104494a99234c2d00c61c3e205df1154169d9dd4fef"} Nov 22 00:31:14 crc kubenswrapper[4928]: I1122 00:31:14.794901 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-t79jj" event={"ID":"10247c42-bfd9-4ff3-967f-917f79dd5ac7","Type":"ContainerStarted","Data":"a48298bde873816eff7772135ae1d2be582887c78e29817970e945e9f493e400"} Nov 22 00:31:14 crc kubenswrapper[4928]: I1122 00:31:14.798712 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx" event={"ID":"733293ab-7999-45c4-a8fe-9459eed18b33","Type":"ContainerStarted","Data":"3287006ad5d692122e48958b9f08f8147a9a29cd8791160c2e9c439ec9b13c38"} Nov 22 00:31:14 crc kubenswrapper[4928]: I1122 00:31:14.801184 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn" event={"ID":"c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714","Type":"ContainerStarted","Data":"b642957427a59a81dad7deb4d2058df2d2cba1a094bb624222ddbad98f1212c5"} 
Nov 22 00:31:14 crc kubenswrapper[4928]: I1122 00:31:14.902914 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-t79jj" podStartSLOduration=2.902894811 podStartE2EDuration="2.902894811s" podCreationTimestamp="2025-11-22 00:31:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:31:14.897685409 +0000 UTC m=+1510.185716610" watchObservedRunningTime="2025-11-22 00:31:14.902894811 +0000 UTC m=+1510.190926002" Nov 22 00:31:15 crc kubenswrapper[4928]: I1122 00:31:15.628224 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad4a716e-fa53-4438-9d94-eafc9b62af54" path="/var/lib/kubelet/pods/ad4a716e-fa53-4438-9d94-eafc9b62af54/volumes" Nov 22 00:31:15 crc kubenswrapper[4928]: I1122 00:31:15.809183 4928 generic.go:334] "Generic (PLEG): container finished" podID="1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d" containerID="a18d24b4952a9d6128ea0104494a99234c2d00c61c3e205df1154169d9dd4fef" exitCode=0 Nov 22 00:31:15 crc kubenswrapper[4928]: I1122 00:31:15.809243 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z" event={"ID":"1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d","Type":"ContainerDied","Data":"a18d24b4952a9d6128ea0104494a99234c2d00c61c3e205df1154169d9dd4fef"} Nov 22 00:31:15 crc kubenswrapper[4928]: I1122 00:31:15.809277 4928 scope.go:117] "RemoveContainer" containerID="11ad6c0c300aa67bcfc554d221cb5e0d95f5799bb28edfb68f1db55fb3354542" Nov 22 00:31:15 crc kubenswrapper[4928]: I1122 00:31:15.809714 4928 scope.go:117] "RemoveContainer" containerID="a18d24b4952a9d6128ea0104494a99234c2d00c61c3e205df1154169d9dd4fef" Nov 22 00:31:15 crc kubenswrapper[4928]: E1122 00:31:15.809925 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 
10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z_service-telemetry(1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z" podUID="1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d" Nov 22 00:31:15 crc kubenswrapper[4928]: I1122 00:31:15.811476 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-t79jj" event={"ID":"10247c42-bfd9-4ff3-967f-917f79dd5ac7","Type":"ContainerStarted","Data":"f708d262e594cd712e2b6c31df3be362382425dc3e7803c34eb740b9594e3523"} Nov 22 00:31:15 crc kubenswrapper[4928]: I1122 00:31:15.813445 4928 generic.go:334] "Generic (PLEG): container finished" podID="d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4" containerID="16b1381f73518e43859a3363c73dad5d22ac419b55072d1af351da715bb20e7e" exitCode=0 Nov 22 00:31:15 crc kubenswrapper[4928]: I1122 00:31:15.813680 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" event={"ID":"d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4","Type":"ContainerDied","Data":"16b1381f73518e43859a3363c73dad5d22ac419b55072d1af351da715bb20e7e"} Nov 22 00:31:15 crc kubenswrapper[4928]: I1122 00:31:15.813963 4928 scope.go:117] "RemoveContainer" containerID="16b1381f73518e43859a3363c73dad5d22ac419b55072d1af351da715bb20e7e" Nov 22 00:31:15 crc kubenswrapper[4928]: E1122 00:31:15.814593 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb_service-telemetry(d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" podUID="d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4" Nov 22 00:31:15 crc kubenswrapper[4928]: I1122 00:31:15.818425 4928 generic.go:334] "Generic (PLEG): container 
finished" podID="733293ab-7999-45c4-a8fe-9459eed18b33" containerID="3287006ad5d692122e48958b9f08f8147a9a29cd8791160c2e9c439ec9b13c38" exitCode=0 Nov 22 00:31:15 crc kubenswrapper[4928]: I1122 00:31:15.818511 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx" event={"ID":"733293ab-7999-45c4-a8fe-9459eed18b33","Type":"ContainerDied","Data":"3287006ad5d692122e48958b9f08f8147a9a29cd8791160c2e9c439ec9b13c38"} Nov 22 00:31:15 crc kubenswrapper[4928]: I1122 00:31:15.819053 4928 scope.go:117] "RemoveContainer" containerID="3287006ad5d692122e48958b9f08f8147a9a29cd8791160c2e9c439ec9b13c38" Nov 22 00:31:15 crc kubenswrapper[4928]: E1122 00:31:15.819265 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx_service-telemetry(733293ab-7999-45c4-a8fe-9459eed18b33)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx" podUID="733293ab-7999-45c4-a8fe-9459eed18b33" Nov 22 00:31:15 crc kubenswrapper[4928]: I1122 00:31:15.820558 4928 generic.go:334] "Generic (PLEG): container finished" podID="c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714" containerID="b642957427a59a81dad7deb4d2058df2d2cba1a094bb624222ddbad98f1212c5" exitCode=0 Nov 22 00:31:15 crc kubenswrapper[4928]: I1122 00:31:15.820602 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn" event={"ID":"c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714","Type":"ContainerDied","Data":"b642957427a59a81dad7deb4d2058df2d2cba1a094bb624222ddbad98f1212c5"} Nov 22 00:31:15 crc kubenswrapper[4928]: I1122 00:31:15.820923 4928 scope.go:117] "RemoveContainer" containerID="b642957427a59a81dad7deb4d2058df2d2cba1a094bb624222ddbad98f1212c5" Nov 22 00:31:15 crc kubenswrapper[4928]: E1122 
00:31:15.821071 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn_service-telemetry(c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn" podUID="c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714" Nov 22 00:31:15 crc kubenswrapper[4928]: I1122 00:31:15.824581 4928 generic.go:334] "Generic (PLEG): container finished" podID="d3ef7e4d-312d-4763-95e9-6245ae074e73" containerID="014507d2fe5a8f9e2033386bdab21c341f820cdad435c8fc1ce7478cba80749b" exitCode=0 Nov 22 00:31:15 crc kubenswrapper[4928]: I1122 00:31:15.824619 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6" event={"ID":"d3ef7e4d-312d-4763-95e9-6245ae074e73","Type":"ContainerDied","Data":"014507d2fe5a8f9e2033386bdab21c341f820cdad435c8fc1ce7478cba80749b"} Nov 22 00:31:15 crc kubenswrapper[4928]: I1122 00:31:15.825026 4928 scope.go:117] "RemoveContainer" containerID="014507d2fe5a8f9e2033386bdab21c341f820cdad435c8fc1ce7478cba80749b" Nov 22 00:31:15 crc kubenswrapper[4928]: E1122 00:31:15.825195 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6_service-telemetry(d3ef7e4d-312d-4763-95e9-6245ae074e73)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6" podUID="d3ef7e4d-312d-4763-95e9-6245ae074e73" Nov 22 00:31:15 crc kubenswrapper[4928]: I1122 00:31:15.859896 4928 scope.go:117] "RemoveContainer" containerID="56ccc3de53be2b180e0e21720e3051b4a670734e9a2a6229ed43eacc7bd18b14" Nov 22 00:31:15 crc kubenswrapper[4928]: I1122 00:31:15.903336 4928 scope.go:117] "RemoveContainer" 
containerID="6999cdcce27d0445d445bdfce6a4a39bbae3d5169c1c5edaae66fd2b0b3211cb" Nov 22 00:31:15 crc kubenswrapper[4928]: I1122 00:31:15.953402 4928 scope.go:117] "RemoveContainer" containerID="deb64d81f8fc0aa28fb2e2d36c437532529811ade8618402f5a3d14b39f6a8c7" Nov 22 00:31:16 crc kubenswrapper[4928]: I1122 00:31:16.010027 4928 scope.go:117] "RemoveContainer" containerID="15c60643ef35366937e38148adceb0e1507768f53c18f5af75ff86e08b192630" Nov 22 00:31:26 crc kubenswrapper[4928]: I1122 00:31:26.617885 4928 scope.go:117] "RemoveContainer" containerID="16b1381f73518e43859a3363c73dad5d22ac419b55072d1af351da715bb20e7e" Nov 22 00:31:26 crc kubenswrapper[4928]: I1122 00:31:26.913870 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb" event={"ID":"d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4","Type":"ContainerStarted","Data":"aeeef707675d0d71325578eeb92b5e26e155c5e6b4c64b726ce8f537da728c55"} Nov 22 00:31:29 crc kubenswrapper[4928]: I1122 00:31:29.618275 4928 scope.go:117] "RemoveContainer" containerID="b642957427a59a81dad7deb4d2058df2d2cba1a094bb624222ddbad98f1212c5" Nov 22 00:31:29 crc kubenswrapper[4928]: I1122 00:31:29.618690 4928 scope.go:117] "RemoveContainer" containerID="3287006ad5d692122e48958b9f08f8147a9a29cd8791160c2e9c439ec9b13c38" Nov 22 00:31:29 crc kubenswrapper[4928]: I1122 00:31:29.935403 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn" event={"ID":"c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714","Type":"ContainerStarted","Data":"9760a3a82ed58d9a2a713360b0667c5e468382607cfa3c0d991adcbdce659daf"} Nov 22 00:31:29 crc kubenswrapper[4928]: I1122 00:31:29.937906 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx" 
event={"ID":"733293ab-7999-45c4-a8fe-9459eed18b33","Type":"ContainerStarted","Data":"513d8c17a55d52c8c2dd9e344fdea389bed8877e9ef064b4577d83a73299d477"} Nov 22 00:31:30 crc kubenswrapper[4928]: I1122 00:31:30.618224 4928 scope.go:117] "RemoveContainer" containerID="a18d24b4952a9d6128ea0104494a99234c2d00c61c3e205df1154169d9dd4fef" Nov 22 00:31:30 crc kubenswrapper[4928]: I1122 00:31:30.953754 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z" event={"ID":"1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d","Type":"ContainerStarted","Data":"d58bc90652f5749b8b6aa6cc1cc9a0c9bc1a2d18a09c15faf50fed7699656313"} Nov 22 00:31:31 crc kubenswrapper[4928]: I1122 00:31:31.618585 4928 scope.go:117] "RemoveContainer" containerID="014507d2fe5a8f9e2033386bdab21c341f820cdad435c8fc1ce7478cba80749b" Nov 22 00:31:31 crc kubenswrapper[4928]: I1122 00:31:31.964309 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6" event={"ID":"d3ef7e4d-312d-4763-95e9-6245ae074e73","Type":"ContainerStarted","Data":"38d0894cacf8bd7cea275d0c48e303cf2c7d6ed0dd4de75bc9756645c78eed7c"} Nov 22 00:31:42 crc kubenswrapper[4928]: I1122 00:31:42.031529 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Nov 22 00:31:42 crc kubenswrapper[4928]: I1122 00:31:42.033092 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/qdr-test" Nov 22 00:31:42 crc kubenswrapper[4928]: I1122 00:31:42.036470 4928 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-selfsigned" Nov 22 00:31:42 crc kubenswrapper[4928]: I1122 00:31:42.037936 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"qdr-test-config" Nov 22 00:31:42 crc kubenswrapper[4928]: I1122 00:31:42.053841 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Nov 22 00:31:42 crc kubenswrapper[4928]: I1122 00:31:42.115922 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/23cb0972-ff54-4c17-9721-4dd1ec13f9a4-qdr-test-config\") pod \"qdr-test\" (UID: \"23cb0972-ff54-4c17-9721-4dd1ec13f9a4\") " pod="service-telemetry/qdr-test" Nov 22 00:31:42 crc kubenswrapper[4928]: I1122 00:31:42.115980 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg594\" (UniqueName: \"kubernetes.io/projected/23cb0972-ff54-4c17-9721-4dd1ec13f9a4-kube-api-access-vg594\") pod \"qdr-test\" (UID: \"23cb0972-ff54-4c17-9721-4dd1ec13f9a4\") " pod="service-telemetry/qdr-test" Nov 22 00:31:42 crc kubenswrapper[4928]: I1122 00:31:42.116071 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/23cb0972-ff54-4c17-9721-4dd1ec13f9a4-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"23cb0972-ff54-4c17-9721-4dd1ec13f9a4\") " pod="service-telemetry/qdr-test" Nov 22 00:31:42 crc kubenswrapper[4928]: I1122 00:31:42.217387 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: 
\"kubernetes.io/secret/23cb0972-ff54-4c17-9721-4dd1ec13f9a4-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"23cb0972-ff54-4c17-9721-4dd1ec13f9a4\") " pod="service-telemetry/qdr-test" Nov 22 00:31:42 crc kubenswrapper[4928]: I1122 00:31:42.217486 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/23cb0972-ff54-4c17-9721-4dd1ec13f9a4-qdr-test-config\") pod \"qdr-test\" (UID: \"23cb0972-ff54-4c17-9721-4dd1ec13f9a4\") " pod="service-telemetry/qdr-test" Nov 22 00:31:42 crc kubenswrapper[4928]: I1122 00:31:42.217512 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg594\" (UniqueName: \"kubernetes.io/projected/23cb0972-ff54-4c17-9721-4dd1ec13f9a4-kube-api-access-vg594\") pod \"qdr-test\" (UID: \"23cb0972-ff54-4c17-9721-4dd1ec13f9a4\") " pod="service-telemetry/qdr-test" Nov 22 00:31:42 crc kubenswrapper[4928]: I1122 00:31:42.218508 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/23cb0972-ff54-4c17-9721-4dd1ec13f9a4-qdr-test-config\") pod \"qdr-test\" (UID: \"23cb0972-ff54-4c17-9721-4dd1ec13f9a4\") " pod="service-telemetry/qdr-test" Nov 22 00:31:42 crc kubenswrapper[4928]: I1122 00:31:42.228866 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/23cb0972-ff54-4c17-9721-4dd1ec13f9a4-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"23cb0972-ff54-4c17-9721-4dd1ec13f9a4\") " pod="service-telemetry/qdr-test" Nov 22 00:31:42 crc kubenswrapper[4928]: I1122 00:31:42.232846 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg594\" (UniqueName: \"kubernetes.io/projected/23cb0972-ff54-4c17-9721-4dd1ec13f9a4-kube-api-access-vg594\") pod \"qdr-test\" (UID: \"23cb0972-ff54-4c17-9721-4dd1ec13f9a4\") 
" pod="service-telemetry/qdr-test" Nov 22 00:31:42 crc kubenswrapper[4928]: I1122 00:31:42.350638 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Nov 22 00:31:42 crc kubenswrapper[4928]: I1122 00:31:42.740202 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Nov 22 00:31:43 crc kubenswrapper[4928]: I1122 00:31:43.051727 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"23cb0972-ff54-4c17-9721-4dd1ec13f9a4","Type":"ContainerStarted","Data":"ec0c213e64dc42b3a609d9b83a05bbbf0a3ca4b061a350a32d34aa6b3c33d37d"} Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.101016 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"23cb0972-ff54-4c17-9721-4dd1ec13f9a4","Type":"ContainerStarted","Data":"435f03bb3553c0e46b260e1951d1317cb13f1b85cca01404be3d6d2e33afb4ac"} Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.111553 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=0.960487613 podStartE2EDuration="8.111532621s" podCreationTimestamp="2025-11-22 00:31:42 +0000 UTC" firstStartedPulling="2025-11-22 00:31:42.764298708 +0000 UTC m=+1538.052329889" lastFinishedPulling="2025-11-22 00:31:49.915343706 +0000 UTC m=+1545.203374897" observedRunningTime="2025-11-22 00:31:50.110327898 +0000 UTC m=+1545.398359089" watchObservedRunningTime="2025-11-22 00:31:50.111532621 +0000 UTC m=+1545.399563822" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.383522 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-sj5k9"] Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.384475 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-sj5k9" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.386605 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.386641 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.387505 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.388671 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.388701 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.388730 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.394651 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-sj5k9"] Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.464568 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-sensubility-config\") pod \"stf-smoketest-smoke1-sj5k9\" (UID: \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\") " pod="service-telemetry/stf-smoketest-smoke1-sj5k9" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.464634 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-sj5k9\" (UID: \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\") " pod="service-telemetry/stf-smoketest-smoke1-sj5k9" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.464655 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-sj5k9\" (UID: \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\") " pod="service-telemetry/stf-smoketest-smoke1-sj5k9" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.464711 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-collectd-config\") pod \"stf-smoketest-smoke1-sj5k9\" (UID: \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\") " pod="service-telemetry/stf-smoketest-smoke1-sj5k9" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.464731 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-ceilometer-publisher\") pod \"stf-smoketest-smoke1-sj5k9\" (UID: \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\") " pod="service-telemetry/stf-smoketest-smoke1-sj5k9" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.464749 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8gvz\" (UniqueName: \"kubernetes.io/projected/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-kube-api-access-m8gvz\") pod \"stf-smoketest-smoke1-sj5k9\" (UID: \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\") " pod="service-telemetry/stf-smoketest-smoke1-sj5k9" Nov 22 00:31:50 crc 
kubenswrapper[4928]: I1122 00:31:50.464814 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-healthcheck-log\") pod \"stf-smoketest-smoke1-sj5k9\" (UID: \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\") " pod="service-telemetry/stf-smoketest-smoke1-sj5k9" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.566765 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-sj5k9\" (UID: \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\") " pod="service-telemetry/stf-smoketest-smoke1-sj5k9" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.567110 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-sj5k9\" (UID: \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\") " pod="service-telemetry/stf-smoketest-smoke1-sj5k9" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.567163 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-collectd-config\") pod \"stf-smoketest-smoke1-sj5k9\" (UID: \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\") " pod="service-telemetry/stf-smoketest-smoke1-sj5k9" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.567188 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-ceilometer-publisher\") pod \"stf-smoketest-smoke1-sj5k9\" (UID: \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\") " 
pod="service-telemetry/stf-smoketest-smoke1-sj5k9" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.567216 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8gvz\" (UniqueName: \"kubernetes.io/projected/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-kube-api-access-m8gvz\") pod \"stf-smoketest-smoke1-sj5k9\" (UID: \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\") " pod="service-telemetry/stf-smoketest-smoke1-sj5k9" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.567277 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-healthcheck-log\") pod \"stf-smoketest-smoke1-sj5k9\" (UID: \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\") " pod="service-telemetry/stf-smoketest-smoke1-sj5k9" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.567352 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-sensubility-config\") pod \"stf-smoketest-smoke1-sj5k9\" (UID: \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\") " pod="service-telemetry/stf-smoketest-smoke1-sj5k9" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.567896 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-sj5k9\" (UID: \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\") " pod="service-telemetry/stf-smoketest-smoke1-sj5k9" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.568020 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-sj5k9\" (UID: 
\"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\") " pod="service-telemetry/stf-smoketest-smoke1-sj5k9" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.568131 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-sensubility-config\") pod \"stf-smoketest-smoke1-sj5k9\" (UID: \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\") " pod="service-telemetry/stf-smoketest-smoke1-sj5k9" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.568208 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-ceilometer-publisher\") pod \"stf-smoketest-smoke1-sj5k9\" (UID: \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\") " pod="service-telemetry/stf-smoketest-smoke1-sj5k9" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.568460 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-collectd-config\") pod \"stf-smoketest-smoke1-sj5k9\" (UID: \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\") " pod="service-telemetry/stf-smoketest-smoke1-sj5k9" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.568579 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-healthcheck-log\") pod \"stf-smoketest-smoke1-sj5k9\" (UID: \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\") " pod="service-telemetry/stf-smoketest-smoke1-sj5k9" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.585629 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8gvz\" (UniqueName: \"kubernetes.io/projected/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-kube-api-access-m8gvz\") pod \"stf-smoketest-smoke1-sj5k9\" (UID: 
\"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\") " pod="service-telemetry/stf-smoketest-smoke1-sj5k9" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.700205 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-sj5k9" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.788498 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.790966 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.797226 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.870729 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b758b\" (UniqueName: \"kubernetes.io/projected/0449afb7-1ff3-486f-b8f6-230968510ceb-kube-api-access-b758b\") pod \"curl\" (UID: \"0449afb7-1ff3-486f-b8f6-230968510ceb\") " pod="service-telemetry/curl" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.972425 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b758b\" (UniqueName: \"kubernetes.io/projected/0449afb7-1ff3-486f-b8f6-230968510ceb-kube-api-access-b758b\") pod \"curl\" (UID: \"0449afb7-1ff3-486f-b8f6-230968510ceb\") " pod="service-telemetry/curl" Nov 22 00:31:50 crc kubenswrapper[4928]: I1122 00:31:50.998246 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b758b\" (UniqueName: \"kubernetes.io/projected/0449afb7-1ff3-486f-b8f6-230968510ceb-kube-api-access-b758b\") pod \"curl\" (UID: \"0449afb7-1ff3-486f-b8f6-230968510ceb\") " pod="service-telemetry/curl" Nov 22 00:31:51 crc kubenswrapper[4928]: I1122 00:31:51.118315 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Nov 22 00:31:51 crc kubenswrapper[4928]: I1122 00:31:51.183084 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-sj5k9"] Nov 22 00:31:51 crc kubenswrapper[4928]: I1122 00:31:51.335070 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Nov 22 00:31:51 crc kubenswrapper[4928]: W1122 00:31:51.349146 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0449afb7_1ff3_486f_b8f6_230968510ceb.slice/crio-1675f93c42f440c91d79193def4a9d6d0ef9d15c8caddf82041d13d64d4c96cd WatchSource:0}: Error finding container 1675f93c42f440c91d79193def4a9d6d0ef9d15c8caddf82041d13d64d4c96cd: Status 404 returned error can't find the container with id 1675f93c42f440c91d79193def4a9d6d0ef9d15c8caddf82041d13d64d4c96cd Nov 22 00:31:52 crc kubenswrapper[4928]: I1122 00:31:52.118052 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"0449afb7-1ff3-486f-b8f6-230968510ceb","Type":"ContainerStarted","Data":"1675f93c42f440c91d79193def4a9d6d0ef9d15c8caddf82041d13d64d4c96cd"} Nov 22 00:31:52 crc kubenswrapper[4928]: I1122 00:31:52.119770 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-sj5k9" event={"ID":"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc","Type":"ContainerStarted","Data":"3af32d77e5794567afc25d3135aeb3f3f78ddfa1067a48c61c33f587424f40bd"} Nov 22 00:31:53 crc kubenswrapper[4928]: I1122 00:31:53.128222 4928 generic.go:334] "Generic (PLEG): container finished" podID="0449afb7-1ff3-486f-b8f6-230968510ceb" containerID="5ef7bdeda587e8a229fb18b95b809b5b1a5039bed6474a1bbf7723ff42a7a2db" exitCode=0 Nov 22 00:31:53 crc kubenswrapper[4928]: I1122 00:31:53.128272 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" 
event={"ID":"0449afb7-1ff3-486f-b8f6-230968510ceb","Type":"ContainerDied","Data":"5ef7bdeda587e8a229fb18b95b809b5b1a5039bed6474a1bbf7723ff42a7a2db"} Nov 22 00:31:54 crc kubenswrapper[4928]: I1122 00:31:54.387668 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Nov 22 00:31:54 crc kubenswrapper[4928]: I1122 00:31:54.423447 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b758b\" (UniqueName: \"kubernetes.io/projected/0449afb7-1ff3-486f-b8f6-230968510ceb-kube-api-access-b758b\") pod \"0449afb7-1ff3-486f-b8f6-230968510ceb\" (UID: \"0449afb7-1ff3-486f-b8f6-230968510ceb\") " Nov 22 00:31:54 crc kubenswrapper[4928]: I1122 00:31:54.429191 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0449afb7-1ff3-486f-b8f6-230968510ceb-kube-api-access-b758b" (OuterVolumeSpecName: "kube-api-access-b758b") pod "0449afb7-1ff3-486f-b8f6-230968510ceb" (UID: "0449afb7-1ff3-486f-b8f6-230968510ceb"). InnerVolumeSpecName "kube-api-access-b758b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:31:54 crc kubenswrapper[4928]: I1122 00:31:54.525317 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b758b\" (UniqueName: \"kubernetes.io/projected/0449afb7-1ff3-486f-b8f6-230968510ceb-kube-api-access-b758b\") on node \"crc\" DevicePath \"\"" Nov 22 00:31:54 crc kubenswrapper[4928]: I1122 00:31:54.588437 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_0449afb7-1ff3-486f-b8f6-230968510ceb/curl/0.log" Nov 22 00:31:54 crc kubenswrapper[4928]: I1122 00:31:54.868023 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-8qmqr_15489fd9-279f-4b12-9a61-54fc3a4ee40b/prometheus-webhook-snmp/0.log" Nov 22 00:31:55 crc kubenswrapper[4928]: I1122 00:31:55.145011 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"0449afb7-1ff3-486f-b8f6-230968510ceb","Type":"ContainerDied","Data":"1675f93c42f440c91d79193def4a9d6d0ef9d15c8caddf82041d13d64d4c96cd"} Nov 22 00:31:55 crc kubenswrapper[4928]: I1122 00:31:55.145054 4928 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1675f93c42f440c91d79193def4a9d6d0ef9d15c8caddf82041d13d64d4c96cd" Nov 22 00:31:55 crc kubenswrapper[4928]: I1122 00:31:55.145101 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Nov 22 00:32:03 crc kubenswrapper[4928]: I1122 00:32:03.199272 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-sj5k9" event={"ID":"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc","Type":"ContainerStarted","Data":"af51ef3efa148f2f5c766de317ce94eb8e6332e19c2703c081c8c9cecc8d2166"} Nov 22 00:32:06 crc kubenswrapper[4928]: I1122 00:32:06.707127 4928 patch_prober.go:28] interesting pod/machine-config-daemon-hp7cp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 00:32:06 crc kubenswrapper[4928]: I1122 00:32:06.707639 4928 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 00:32:11 crc kubenswrapper[4928]: I1122 00:32:11.251411 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-sj5k9" event={"ID":"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc","Type":"ContainerStarted","Data":"3b81d59c8a13cc8de5cf5810481fe1116033bdebce1db563fe08fb1885269960"} Nov 22 00:32:11 crc kubenswrapper[4928]: I1122 00:32:11.282138 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-sj5k9" podStartSLOduration=2.0243262 podStartE2EDuration="21.282108678s" podCreationTimestamp="2025-11-22 00:31:50 +0000 UTC" firstStartedPulling="2025-11-22 00:31:51.191845258 +0000 UTC m=+1546.479876449" lastFinishedPulling="2025-11-22 00:32:10.449627736 +0000 UTC m=+1565.737658927" observedRunningTime="2025-11-22 00:32:11.27223448 +0000 UTC m=+1566.560265681" 
watchObservedRunningTime="2025-11-22 00:32:11.282108678 +0000 UTC m=+1566.570139899" Nov 22 00:32:25 crc kubenswrapper[4928]: I1122 00:32:25.057230 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-8qmqr_15489fd9-279f-4b12-9a61-54fc3a4ee40b/prometheus-webhook-snmp/0.log" Nov 22 00:32:36 crc kubenswrapper[4928]: I1122 00:32:36.433533 4928 generic.go:334] "Generic (PLEG): container finished" podID="1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc" containerID="af51ef3efa148f2f5c766de317ce94eb8e6332e19c2703c081c8c9cecc8d2166" exitCode=1 Nov 22 00:32:36 crc kubenswrapper[4928]: I1122 00:32:36.433595 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-sj5k9" event={"ID":"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc","Type":"ContainerDied","Data":"af51ef3efa148f2f5c766de317ce94eb8e6332e19c2703c081c8c9cecc8d2166"} Nov 22 00:32:36 crc kubenswrapper[4928]: I1122 00:32:36.434471 4928 scope.go:117] "RemoveContainer" containerID="af51ef3efa148f2f5c766de317ce94eb8e6332e19c2703c081c8c9cecc8d2166" Nov 22 00:32:36 crc kubenswrapper[4928]: I1122 00:32:36.708018 4928 patch_prober.go:28] interesting pod/machine-config-daemon-hp7cp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 00:32:36 crc kubenswrapper[4928]: I1122 00:32:36.708095 4928 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 00:32:42 crc kubenswrapper[4928]: I1122 00:32:42.478863 4928 generic.go:334] "Generic (PLEG): container finished" 
podID="1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc" containerID="3b81d59c8a13cc8de5cf5810481fe1116033bdebce1db563fe08fb1885269960" exitCode=0 Nov 22 00:32:42 crc kubenswrapper[4928]: I1122 00:32:42.478963 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-sj5k9" event={"ID":"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc","Type":"ContainerDied","Data":"3b81d59c8a13cc8de5cf5810481fe1116033bdebce1db563fe08fb1885269960"} Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.786197 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-sj5k9" Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.876493 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-collectd-config\") pod \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\" (UID: \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\") " Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.876598 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8gvz\" (UniqueName: \"kubernetes.io/projected/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-kube-api-access-m8gvz\") pod \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\" (UID: \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\") " Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.876757 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-ceilometer-publisher\") pod \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\" (UID: \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\") " Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.876808 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: 
\"kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-collectd-entrypoint-script\") pod \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\" (UID: \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\") " Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.876833 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-ceilometer-entrypoint-script\") pod \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\" (UID: \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\") " Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.876890 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-healthcheck-log\") pod \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\" (UID: \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\") " Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.877839 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-sensubility-config\") pod \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\" (UID: \"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc\") " Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.893975 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-kube-api-access-m8gvz" (OuterVolumeSpecName: "kube-api-access-m8gvz") pod "1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc" (UID: "1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc"). InnerVolumeSpecName "kube-api-access-m8gvz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.902307 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc" (UID: "1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.907542 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc" (UID: "1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.910239 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bdr9x"] Nov 22 00:32:43 crc kubenswrapper[4928]: E1122 00:32:43.910481 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc" containerName="smoketest-collectd" Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.910497 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc" containerName="smoketest-collectd" Nov 22 00:32:43 crc kubenswrapper[4928]: E1122 00:32:43.910517 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0449afb7-1ff3-486f-b8f6-230968510ceb" containerName="curl" Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.910523 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="0449afb7-1ff3-486f-b8f6-230968510ceb" containerName="curl" Nov 22 00:32:43 crc 
kubenswrapper[4928]: E1122 00:32:43.910539 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc" containerName="smoketest-ceilometer" Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.910545 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc" containerName="smoketest-ceilometer" Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.910658 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc" containerName="smoketest-ceilometer" Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.910671 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="0449afb7-1ff3-486f-b8f6-230968510ceb" containerName="curl" Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.910681 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc" containerName="smoketest-collectd" Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.911581 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bdr9x" Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.920345 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc" (UID: "1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc"). InnerVolumeSpecName "ceilometer-publisher". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.920757 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc" (UID: "1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.938041 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc" (UID: "1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.940335 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bdr9x"] Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.942736 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc" (UID: "1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc"). InnerVolumeSpecName "healthcheck-log". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.978979 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37bf28c5-f277-4d11-aa26-3be7828432ee-catalog-content\") pod \"community-operators-bdr9x\" (UID: \"37bf28c5-f277-4d11-aa26-3be7828432ee\") " pod="openshift-marketplace/community-operators-bdr9x" Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.979052 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37bf28c5-f277-4d11-aa26-3be7828432ee-utilities\") pod \"community-operators-bdr9x\" (UID: \"37bf28c5-f277-4d11-aa26-3be7828432ee\") " pod="openshift-marketplace/community-operators-bdr9x" Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.979085 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69fvn\" (UniqueName: \"kubernetes.io/projected/37bf28c5-f277-4d11-aa26-3be7828432ee-kube-api-access-69fvn\") pod \"community-operators-bdr9x\" (UID: \"37bf28c5-f277-4d11-aa26-3be7828432ee\") " pod="openshift-marketplace/community-operators-bdr9x" Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.979194 4928 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.979213 4928 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.979223 4928 reconciler_common.go:293] "Volume detached for volume 
\"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.979232 4928 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-healthcheck-log\") on node \"crc\" DevicePath \"\"" Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.979243 4928 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-sensubility-config\") on node \"crc\" DevicePath \"\"" Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.979251 4928 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-collectd-config\") on node \"crc\" DevicePath \"\"" Nov 22 00:32:43 crc kubenswrapper[4928]: I1122 00:32:43.979259 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8gvz\" (UniqueName: \"kubernetes.io/projected/1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc-kube-api-access-m8gvz\") on node \"crc\" DevicePath \"\"" Nov 22 00:32:44 crc kubenswrapper[4928]: I1122 00:32:44.080072 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37bf28c5-f277-4d11-aa26-3be7828432ee-catalog-content\") pod \"community-operators-bdr9x\" (UID: \"37bf28c5-f277-4d11-aa26-3be7828432ee\") " pod="openshift-marketplace/community-operators-bdr9x" Nov 22 00:32:44 crc kubenswrapper[4928]: I1122 00:32:44.080131 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37bf28c5-f277-4d11-aa26-3be7828432ee-utilities\") pod \"community-operators-bdr9x\" (UID: \"37bf28c5-f277-4d11-aa26-3be7828432ee\") " 
pod="openshift-marketplace/community-operators-bdr9x" Nov 22 00:32:44 crc kubenswrapper[4928]: I1122 00:32:44.080159 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69fvn\" (UniqueName: \"kubernetes.io/projected/37bf28c5-f277-4d11-aa26-3be7828432ee-kube-api-access-69fvn\") pod \"community-operators-bdr9x\" (UID: \"37bf28c5-f277-4d11-aa26-3be7828432ee\") " pod="openshift-marketplace/community-operators-bdr9x" Nov 22 00:32:44 crc kubenswrapper[4928]: I1122 00:32:44.080659 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37bf28c5-f277-4d11-aa26-3be7828432ee-utilities\") pod \"community-operators-bdr9x\" (UID: \"37bf28c5-f277-4d11-aa26-3be7828432ee\") " pod="openshift-marketplace/community-operators-bdr9x" Nov 22 00:32:44 crc kubenswrapper[4928]: I1122 00:32:44.080846 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37bf28c5-f277-4d11-aa26-3be7828432ee-catalog-content\") pod \"community-operators-bdr9x\" (UID: \"37bf28c5-f277-4d11-aa26-3be7828432ee\") " pod="openshift-marketplace/community-operators-bdr9x" Nov 22 00:32:44 crc kubenswrapper[4928]: I1122 00:32:44.097337 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69fvn\" (UniqueName: \"kubernetes.io/projected/37bf28c5-f277-4d11-aa26-3be7828432ee-kube-api-access-69fvn\") pod \"community-operators-bdr9x\" (UID: \"37bf28c5-f277-4d11-aa26-3be7828432ee\") " pod="openshift-marketplace/community-operators-bdr9x" Nov 22 00:32:44 crc kubenswrapper[4928]: I1122 00:32:44.287335 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bdr9x" Nov 22 00:32:44 crc kubenswrapper[4928]: I1122 00:32:44.509610 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-sj5k9" event={"ID":"1d0b2f0d-f7bc-42e7-ace4-fe5a188c71fc","Type":"ContainerDied","Data":"3af32d77e5794567afc25d3135aeb3f3f78ddfa1067a48c61c33f587424f40bd"} Nov 22 00:32:44 crc kubenswrapper[4928]: I1122 00:32:44.509649 4928 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3af32d77e5794567afc25d3135aeb3f3f78ddfa1067a48c61c33f587424f40bd" Nov 22 00:32:44 crc kubenswrapper[4928]: I1122 00:32:44.509713 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-sj5k9" Nov 22 00:32:44 crc kubenswrapper[4928]: I1122 00:32:44.772075 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bdr9x"] Nov 22 00:32:44 crc kubenswrapper[4928]: W1122 00:32:44.777785 4928 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37bf28c5_f277_4d11_aa26_3be7828432ee.slice/crio-e0584abb30b2935427f6cee25eccc0cacc0988fc235480e3199d64130706c048 WatchSource:0}: Error finding container e0584abb30b2935427f6cee25eccc0cacc0988fc235480e3199d64130706c048: Status 404 returned error can't find the container with id e0584abb30b2935427f6cee25eccc0cacc0988fc235480e3199d64130706c048 Nov 22 00:32:45 crc kubenswrapper[4928]: I1122 00:32:45.517708 4928 generic.go:334] "Generic (PLEG): container finished" podID="37bf28c5-f277-4d11-aa26-3be7828432ee" containerID="f426253cee2718948de40acd398674da12f96dc43b71ccc1d6d7a000e3feb700" exitCode=0 Nov 22 00:32:45 crc kubenswrapper[4928]: I1122 00:32:45.517753 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdr9x" 
event={"ID":"37bf28c5-f277-4d11-aa26-3be7828432ee","Type":"ContainerDied","Data":"f426253cee2718948de40acd398674da12f96dc43b71ccc1d6d7a000e3feb700"} Nov 22 00:32:45 crc kubenswrapper[4928]: I1122 00:32:45.517978 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdr9x" event={"ID":"37bf28c5-f277-4d11-aa26-3be7828432ee","Type":"ContainerStarted","Data":"e0584abb30b2935427f6cee25eccc0cacc0988fc235480e3199d64130706c048"} Nov 22 00:32:46 crc kubenswrapper[4928]: I1122 00:32:46.530188 4928 generic.go:334] "Generic (PLEG): container finished" podID="37bf28c5-f277-4d11-aa26-3be7828432ee" containerID="a9ca7ea6a36517f6ed8d873da3f09cfd2900e4637a423450dbb74eb0d7a14e46" exitCode=0 Nov 22 00:32:46 crc kubenswrapper[4928]: I1122 00:32:46.530251 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdr9x" event={"ID":"37bf28c5-f277-4d11-aa26-3be7828432ee","Type":"ContainerDied","Data":"a9ca7ea6a36517f6ed8d873da3f09cfd2900e4637a423450dbb74eb0d7a14e46"} Nov 22 00:32:47 crc kubenswrapper[4928]: I1122 00:32:47.540047 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdr9x" event={"ID":"37bf28c5-f277-4d11-aa26-3be7828432ee","Type":"ContainerStarted","Data":"fcd32b70ae7773504d6f9501644ea7fade7aa7206684f5ffeb37eb8883d0cd72"} Nov 22 00:32:47 crc kubenswrapper[4928]: I1122 00:32:47.556535 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bdr9x" podStartSLOduration=3.118637828 podStartE2EDuration="4.55651473s" podCreationTimestamp="2025-11-22 00:32:43 +0000 UTC" firstStartedPulling="2025-11-22 00:32:45.519267592 +0000 UTC m=+1600.807298803" lastFinishedPulling="2025-11-22 00:32:46.957144524 +0000 UTC m=+1602.245175705" observedRunningTime="2025-11-22 00:32:47.554060104 +0000 UTC m=+1602.842091335" watchObservedRunningTime="2025-11-22 00:32:47.55651473 +0000 UTC 
m=+1602.844545921" Nov 22 00:32:52 crc kubenswrapper[4928]: I1122 00:32:52.032752 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-4qdnt"] Nov 22 00:32:52 crc kubenswrapper[4928]: I1122 00:32:52.034255 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-4qdnt" Nov 22 00:32:52 crc kubenswrapper[4928]: I1122 00:32:52.043534 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Nov 22 00:32:52 crc kubenswrapper[4928]: I1122 00:32:52.043593 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Nov 22 00:32:52 crc kubenswrapper[4928]: I1122 00:32:52.043725 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Nov 22 00:32:52 crc kubenswrapper[4928]: I1122 00:32:52.043930 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Nov 22 00:32:52 crc kubenswrapper[4928]: I1122 00:32:52.043986 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Nov 22 00:32:52 crc kubenswrapper[4928]: I1122 00:32:52.044300 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Nov 22 00:32:52 crc kubenswrapper[4928]: I1122 00:32:52.052384 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-4qdnt"] Nov 22 00:32:52 crc kubenswrapper[4928]: I1122 00:32:52.191684 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-healthcheck-log\") pod 
\"stf-smoketest-smoke1-4qdnt\" (UID: \"e2f3b92b-b265-413b-ab21-6836aab7918d\") " pod="service-telemetry/stf-smoketest-smoke1-4qdnt" Nov 22 00:32:52 crc kubenswrapper[4928]: I1122 00:32:52.192272 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-4qdnt\" (UID: \"e2f3b92b-b265-413b-ab21-6836aab7918d\") " pod="service-telemetry/stf-smoketest-smoke1-4qdnt" Nov 22 00:32:52 crc kubenswrapper[4928]: I1122 00:32:52.192331 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-ceilometer-publisher\") pod \"stf-smoketest-smoke1-4qdnt\" (UID: \"e2f3b92b-b265-413b-ab21-6836aab7918d\") " pod="service-telemetry/stf-smoketest-smoke1-4qdnt" Nov 22 00:32:52 crc kubenswrapper[4928]: I1122 00:32:52.192362 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-4qdnt\" (UID: \"e2f3b92b-b265-413b-ab21-6836aab7918d\") " pod="service-telemetry/stf-smoketest-smoke1-4qdnt" Nov 22 00:32:52 crc kubenswrapper[4928]: I1122 00:32:52.192389 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6bjt\" (UniqueName: \"kubernetes.io/projected/e2f3b92b-b265-413b-ab21-6836aab7918d-kube-api-access-w6bjt\") pod \"stf-smoketest-smoke1-4qdnt\" (UID: \"e2f3b92b-b265-413b-ab21-6836aab7918d\") " pod="service-telemetry/stf-smoketest-smoke1-4qdnt" Nov 22 00:32:52 crc kubenswrapper[4928]: I1122 00:32:52.192430 4928 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-sensubility-config\") pod \"stf-smoketest-smoke1-4qdnt\" (UID: \"e2f3b92b-b265-413b-ab21-6836aab7918d\") " pod="service-telemetry/stf-smoketest-smoke1-4qdnt" Nov 22 00:32:52 crc kubenswrapper[4928]: I1122 00:32:52.192475 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-collectd-config\") pod \"stf-smoketest-smoke1-4qdnt\" (UID: \"e2f3b92b-b265-413b-ab21-6836aab7918d\") " pod="service-telemetry/stf-smoketest-smoke1-4qdnt" Nov 22 00:32:52 crc kubenswrapper[4928]: I1122 00:32:52.293674 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-sensubility-config\") pod \"stf-smoketest-smoke1-4qdnt\" (UID: \"e2f3b92b-b265-413b-ab21-6836aab7918d\") " pod="service-telemetry/stf-smoketest-smoke1-4qdnt" Nov 22 00:32:52 crc kubenswrapper[4928]: I1122 00:32:52.294116 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-collectd-config\") pod \"stf-smoketest-smoke1-4qdnt\" (UID: \"e2f3b92b-b265-413b-ab21-6836aab7918d\") " pod="service-telemetry/stf-smoketest-smoke1-4qdnt" Nov 22 00:32:52 crc kubenswrapper[4928]: I1122 00:32:52.294316 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-healthcheck-log\") pod \"stf-smoketest-smoke1-4qdnt\" (UID: \"e2f3b92b-b265-413b-ab21-6836aab7918d\") " pod="service-telemetry/stf-smoketest-smoke1-4qdnt" Nov 22 00:32:52 crc kubenswrapper[4928]: I1122 00:32:52.294444 4928 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-4qdnt\" (UID: \"e2f3b92b-b265-413b-ab21-6836aab7918d\") " pod="service-telemetry/stf-smoketest-smoke1-4qdnt" Nov 22 00:32:52 crc kubenswrapper[4928]: I1122 00:32:52.294572 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-ceilometer-publisher\") pod \"stf-smoketest-smoke1-4qdnt\" (UID: \"e2f3b92b-b265-413b-ab21-6836aab7918d\") " pod="service-telemetry/stf-smoketest-smoke1-4qdnt" Nov 22 00:32:52 crc kubenswrapper[4928]: I1122 00:32:52.294692 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-4qdnt\" (UID: \"e2f3b92b-b265-413b-ab21-6836aab7918d\") " pod="service-telemetry/stf-smoketest-smoke1-4qdnt" Nov 22 00:32:52 crc kubenswrapper[4928]: I1122 00:32:52.294651 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-sensubility-config\") pod \"stf-smoketest-smoke1-4qdnt\" (UID: \"e2f3b92b-b265-413b-ab21-6836aab7918d\") " pod="service-telemetry/stf-smoketest-smoke1-4qdnt" Nov 22 00:32:52 crc kubenswrapper[4928]: I1122 00:32:52.294832 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6bjt\" (UniqueName: \"kubernetes.io/projected/e2f3b92b-b265-413b-ab21-6836aab7918d-kube-api-access-w6bjt\") pod \"stf-smoketest-smoke1-4qdnt\" (UID: \"e2f3b92b-b265-413b-ab21-6836aab7918d\") " pod="service-telemetry/stf-smoketest-smoke1-4qdnt" Nov 22 00:32:52 crc 
kubenswrapper[4928]: I1122 00:32:52.295133 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-healthcheck-log\") pod \"stf-smoketest-smoke1-4qdnt\" (UID: \"e2f3b92b-b265-413b-ab21-6836aab7918d\") " pod="service-telemetry/stf-smoketest-smoke1-4qdnt" Nov 22 00:32:52 crc kubenswrapper[4928]: I1122 00:32:52.295260 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-4qdnt\" (UID: \"e2f3b92b-b265-413b-ab21-6836aab7918d\") " pod="service-telemetry/stf-smoketest-smoke1-4qdnt" Nov 22 00:32:52 crc kubenswrapper[4928]: I1122 00:32:52.295345 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-ceilometer-publisher\") pod \"stf-smoketest-smoke1-4qdnt\" (UID: \"e2f3b92b-b265-413b-ab21-6836aab7918d\") " pod="service-telemetry/stf-smoketest-smoke1-4qdnt" Nov 22 00:32:52 crc kubenswrapper[4928]: I1122 00:32:52.295535 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-4qdnt\" (UID: \"e2f3b92b-b265-413b-ab21-6836aab7918d\") " pod="service-telemetry/stf-smoketest-smoke1-4qdnt" Nov 22 00:32:52 crc kubenswrapper[4928]: I1122 00:32:52.295895 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-collectd-config\") pod \"stf-smoketest-smoke1-4qdnt\" (UID: \"e2f3b92b-b265-413b-ab21-6836aab7918d\") " pod="service-telemetry/stf-smoketest-smoke1-4qdnt" Nov 22 00:32:52 crc 
kubenswrapper[4928]: I1122 00:32:52.316108 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6bjt\" (UniqueName: \"kubernetes.io/projected/e2f3b92b-b265-413b-ab21-6836aab7918d-kube-api-access-w6bjt\") pod \"stf-smoketest-smoke1-4qdnt\" (UID: \"e2f3b92b-b265-413b-ab21-6836aab7918d\") " pod="service-telemetry/stf-smoketest-smoke1-4qdnt" Nov 22 00:32:52 crc kubenswrapper[4928]: I1122 00:32:52.353180 4928 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-4qdnt" Nov 22 00:32:52 crc kubenswrapper[4928]: I1122 00:32:52.613997 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-4qdnt"] Nov 22 00:32:53 crc kubenswrapper[4928]: I1122 00:32:53.584535 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-4qdnt" event={"ID":"e2f3b92b-b265-413b-ab21-6836aab7918d","Type":"ContainerStarted","Data":"8f1fc113c2a9aac48a6b9a4472ba972b40dff9abc33e9b31a1cd813caffd26e9"} Nov 22 00:32:53 crc kubenswrapper[4928]: I1122 00:32:53.585015 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-4qdnt" event={"ID":"e2f3b92b-b265-413b-ab21-6836aab7918d","Type":"ContainerStarted","Data":"acc11529fbceda60c5341f57b4a6453f2c845f22caaee6d4d6e762a48a17eb0b"} Nov 22 00:32:53 crc kubenswrapper[4928]: I1122 00:32:53.585036 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-4qdnt" event={"ID":"e2f3b92b-b265-413b-ab21-6836aab7918d","Type":"ContainerStarted","Data":"055137fe37cb289847d47d982793d2cea37f7a27f00a9ba0bb75133701984209"} Nov 22 00:32:53 crc kubenswrapper[4928]: I1122 00:32:53.610823 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-4qdnt" podStartSLOduration=1.6108074650000002 podStartE2EDuration="1.610807465s" podCreationTimestamp="2025-11-22 
00:32:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:32:53.608520022 +0000 UTC m=+1608.896551233" watchObservedRunningTime="2025-11-22 00:32:53.610807465 +0000 UTC m=+1608.898838656" Nov 22 00:32:54 crc kubenswrapper[4928]: I1122 00:32:54.288678 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bdr9x" Nov 22 00:32:54 crc kubenswrapper[4928]: I1122 00:32:54.288717 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bdr9x" Nov 22 00:32:54 crc kubenswrapper[4928]: I1122 00:32:54.336521 4928 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bdr9x" Nov 22 00:32:54 crc kubenswrapper[4928]: I1122 00:32:54.642293 4928 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bdr9x" Nov 22 00:32:54 crc kubenswrapper[4928]: I1122 00:32:54.689836 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bdr9x"] Nov 22 00:32:56 crc kubenswrapper[4928]: I1122 00:32:56.625954 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bdr9x" podUID="37bf28c5-f277-4d11-aa26-3be7828432ee" containerName="registry-server" containerID="cri-o://fcd32b70ae7773504d6f9501644ea7fade7aa7206684f5ffeb37eb8883d0cd72" gracePeriod=2 Nov 22 00:32:57 crc kubenswrapper[4928]: I1122 00:32:57.634061 4928 generic.go:334] "Generic (PLEG): container finished" podID="37bf28c5-f277-4d11-aa26-3be7828432ee" containerID="fcd32b70ae7773504d6f9501644ea7fade7aa7206684f5ffeb37eb8883d0cd72" exitCode=0 Nov 22 00:32:57 crc kubenswrapper[4928]: I1122 00:32:57.634148 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-bdr9x" event={"ID":"37bf28c5-f277-4d11-aa26-3be7828432ee","Type":"ContainerDied","Data":"fcd32b70ae7773504d6f9501644ea7fade7aa7206684f5ffeb37eb8883d0cd72"} Nov 22 00:32:57 crc kubenswrapper[4928]: I1122 00:32:57.810564 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bdr9x" Nov 22 00:32:57 crc kubenswrapper[4928]: I1122 00:32:57.990531 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37bf28c5-f277-4d11-aa26-3be7828432ee-utilities\") pod \"37bf28c5-f277-4d11-aa26-3be7828432ee\" (UID: \"37bf28c5-f277-4d11-aa26-3be7828432ee\") " Nov 22 00:32:57 crc kubenswrapper[4928]: I1122 00:32:57.990588 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69fvn\" (UniqueName: \"kubernetes.io/projected/37bf28c5-f277-4d11-aa26-3be7828432ee-kube-api-access-69fvn\") pod \"37bf28c5-f277-4d11-aa26-3be7828432ee\" (UID: \"37bf28c5-f277-4d11-aa26-3be7828432ee\") " Nov 22 00:32:57 crc kubenswrapper[4928]: I1122 00:32:57.990632 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37bf28c5-f277-4d11-aa26-3be7828432ee-catalog-content\") pod \"37bf28c5-f277-4d11-aa26-3be7828432ee\" (UID: \"37bf28c5-f277-4d11-aa26-3be7828432ee\") " Nov 22 00:32:57 crc kubenswrapper[4928]: I1122 00:32:57.991564 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37bf28c5-f277-4d11-aa26-3be7828432ee-utilities" (OuterVolumeSpecName: "utilities") pod "37bf28c5-f277-4d11-aa26-3be7828432ee" (UID: "37bf28c5-f277-4d11-aa26-3be7828432ee"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:32:57 crc kubenswrapper[4928]: I1122 00:32:57.996774 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37bf28c5-f277-4d11-aa26-3be7828432ee-kube-api-access-69fvn" (OuterVolumeSpecName: "kube-api-access-69fvn") pod "37bf28c5-f277-4d11-aa26-3be7828432ee" (UID: "37bf28c5-f277-4d11-aa26-3be7828432ee"). InnerVolumeSpecName "kube-api-access-69fvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:32:58 crc kubenswrapper[4928]: I1122 00:32:58.042646 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37bf28c5-f277-4d11-aa26-3be7828432ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37bf28c5-f277-4d11-aa26-3be7828432ee" (UID: "37bf28c5-f277-4d11-aa26-3be7828432ee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:32:58 crc kubenswrapper[4928]: I1122 00:32:58.092586 4928 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37bf28c5-f277-4d11-aa26-3be7828432ee-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 00:32:58 crc kubenswrapper[4928]: I1122 00:32:58.092622 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69fvn\" (UniqueName: \"kubernetes.io/projected/37bf28c5-f277-4d11-aa26-3be7828432ee-kube-api-access-69fvn\") on node \"crc\" DevicePath \"\"" Nov 22 00:32:58 crc kubenswrapper[4928]: I1122 00:32:58.092636 4928 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37bf28c5-f277-4d11-aa26-3be7828432ee-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 00:32:58 crc kubenswrapper[4928]: I1122 00:32:58.644456 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdr9x" 
event={"ID":"37bf28c5-f277-4d11-aa26-3be7828432ee","Type":"ContainerDied","Data":"e0584abb30b2935427f6cee25eccc0cacc0988fc235480e3199d64130706c048"} Nov 22 00:32:58 crc kubenswrapper[4928]: I1122 00:32:58.644739 4928 scope.go:117] "RemoveContainer" containerID="fcd32b70ae7773504d6f9501644ea7fade7aa7206684f5ffeb37eb8883d0cd72" Nov 22 00:32:58 crc kubenswrapper[4928]: I1122 00:32:58.644545 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bdr9x" Nov 22 00:32:58 crc kubenswrapper[4928]: I1122 00:32:58.662939 4928 scope.go:117] "RemoveContainer" containerID="a9ca7ea6a36517f6ed8d873da3f09cfd2900e4637a423450dbb74eb0d7a14e46" Nov 22 00:32:58 crc kubenswrapper[4928]: I1122 00:32:58.687979 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bdr9x"] Nov 22 00:32:58 crc kubenswrapper[4928]: I1122 00:32:58.699931 4928 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bdr9x"] Nov 22 00:32:58 crc kubenswrapper[4928]: I1122 00:32:58.706435 4928 scope.go:117] "RemoveContainer" containerID="f426253cee2718948de40acd398674da12f96dc43b71ccc1d6d7a000e3feb700" Nov 22 00:32:59 crc kubenswrapper[4928]: I1122 00:32:59.641652 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37bf28c5-f277-4d11-aa26-3be7828432ee" path="/var/lib/kubelet/pods/37bf28c5-f277-4d11-aa26-3be7828432ee/volumes" Nov 22 00:33:06 crc kubenswrapper[4928]: I1122 00:33:06.707692 4928 patch_prober.go:28] interesting pod/machine-config-daemon-hp7cp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 00:33:06 crc kubenswrapper[4928]: I1122 00:33:06.708564 4928 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 00:33:06 crc kubenswrapper[4928]: I1122 00:33:06.708643 4928 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" Nov 22 00:33:06 crc kubenswrapper[4928]: I1122 00:33:06.709760 4928 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"282a289cf12612849994dc9fccac42915b13cc2cdf7f7d4d5162c857fb9dc030"} pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 00:33:06 crc kubenswrapper[4928]: I1122 00:33:06.709923 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" containerName="machine-config-daemon" containerID="cri-o://282a289cf12612849994dc9fccac42915b13cc2cdf7f7d4d5162c857fb9dc030" gracePeriod=600 Nov 22 00:33:06 crc kubenswrapper[4928]: E1122 00:33:06.841470 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hp7cp_openshift-machine-config-operator(897ce64d-0600-4c6a-b684-0c1683fa14bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" Nov 22 00:33:07 crc kubenswrapper[4928]: I1122 00:33:07.726775 4928 generic.go:334] "Generic (PLEG): container finished" podID="897ce64d-0600-4c6a-b684-0c1683fa14bc" 
containerID="282a289cf12612849994dc9fccac42915b13cc2cdf7f7d4d5162c857fb9dc030" exitCode=0 Nov 22 00:33:07 crc kubenswrapper[4928]: I1122 00:33:07.726884 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" event={"ID":"897ce64d-0600-4c6a-b684-0c1683fa14bc","Type":"ContainerDied","Data":"282a289cf12612849994dc9fccac42915b13cc2cdf7f7d4d5162c857fb9dc030"} Nov 22 00:33:07 crc kubenswrapper[4928]: I1122 00:33:07.726955 4928 scope.go:117] "RemoveContainer" containerID="2313a8e1c327d21680589179d678b34e89c05b2d249408c247573805e438ec91" Nov 22 00:33:07 crc kubenswrapper[4928]: I1122 00:33:07.727824 4928 scope.go:117] "RemoveContainer" containerID="282a289cf12612849994dc9fccac42915b13cc2cdf7f7d4d5162c857fb9dc030" Nov 22 00:33:07 crc kubenswrapper[4928]: E1122 00:33:07.728210 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hp7cp_openshift-machine-config-operator(897ce64d-0600-4c6a-b684-0c1683fa14bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" Nov 22 00:33:16 crc kubenswrapper[4928]: I1122 00:33:16.452457 4928 scope.go:117] "RemoveContainer" containerID="10268f22d0e306138169015af75251c9503001f368bc42c7f8a600cb782361d1" Nov 22 00:33:16 crc kubenswrapper[4928]: I1122 00:33:16.498286 4928 scope.go:117] "RemoveContainer" containerID="c45119389d33a3f0d267b4fa97e5d38b511435434cff6e219e8dfc4763901744" Nov 22 00:33:19 crc kubenswrapper[4928]: I1122 00:33:19.618465 4928 scope.go:117] "RemoveContainer" containerID="282a289cf12612849994dc9fccac42915b13cc2cdf7f7d4d5162c857fb9dc030" Nov 22 00:33:19 crc kubenswrapper[4928]: E1122 00:33:19.618882 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hp7cp_openshift-machine-config-operator(897ce64d-0600-4c6a-b684-0c1683fa14bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" Nov 22 00:33:24 crc kubenswrapper[4928]: I1122 00:33:24.862161 4928 generic.go:334] "Generic (PLEG): container finished" podID="e2f3b92b-b265-413b-ab21-6836aab7918d" containerID="8f1fc113c2a9aac48a6b9a4472ba972b40dff9abc33e9b31a1cd813caffd26e9" exitCode=0 Nov 22 00:33:24 crc kubenswrapper[4928]: I1122 00:33:24.862224 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-4qdnt" event={"ID":"e2f3b92b-b265-413b-ab21-6836aab7918d","Type":"ContainerDied","Data":"8f1fc113c2a9aac48a6b9a4472ba972b40dff9abc33e9b31a1cd813caffd26e9"} Nov 22 00:33:24 crc kubenswrapper[4928]: I1122 00:33:24.864152 4928 scope.go:117] "RemoveContainer" containerID="8f1fc113c2a9aac48a6b9a4472ba972b40dff9abc33e9b31a1cd813caffd26e9" Nov 22 00:33:26 crc kubenswrapper[4928]: I1122 00:33:26.880416 4928 generic.go:334] "Generic (PLEG): container finished" podID="e2f3b92b-b265-413b-ab21-6836aab7918d" containerID="acc11529fbceda60c5341f57b4a6453f2c845f22caaee6d4d6e762a48a17eb0b" exitCode=0 Nov 22 00:33:26 crc kubenswrapper[4928]: I1122 00:33:26.880489 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-4qdnt" event={"ID":"e2f3b92b-b265-413b-ab21-6836aab7918d","Type":"ContainerDied","Data":"acc11529fbceda60c5341f57b4a6453f2c845f22caaee6d4d6e762a48a17eb0b"} Nov 22 00:33:28 crc kubenswrapper[4928]: I1122 00:33:28.109537 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-4qdnt" Nov 22 00:33:28 crc kubenswrapper[4928]: I1122 00:33:28.249179 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-collectd-entrypoint-script\") pod \"e2f3b92b-b265-413b-ab21-6836aab7918d\" (UID: \"e2f3b92b-b265-413b-ab21-6836aab7918d\") " Nov 22 00:33:28 crc kubenswrapper[4928]: I1122 00:33:28.249248 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-ceilometer-entrypoint-script\") pod \"e2f3b92b-b265-413b-ab21-6836aab7918d\" (UID: \"e2f3b92b-b265-413b-ab21-6836aab7918d\") " Nov 22 00:33:28 crc kubenswrapper[4928]: I1122 00:33:28.249369 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6bjt\" (UniqueName: \"kubernetes.io/projected/e2f3b92b-b265-413b-ab21-6836aab7918d-kube-api-access-w6bjt\") pod \"e2f3b92b-b265-413b-ab21-6836aab7918d\" (UID: \"e2f3b92b-b265-413b-ab21-6836aab7918d\") " Nov 22 00:33:28 crc kubenswrapper[4928]: I1122 00:33:28.249432 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-collectd-config\") pod \"e2f3b92b-b265-413b-ab21-6836aab7918d\" (UID: \"e2f3b92b-b265-413b-ab21-6836aab7918d\") " Nov 22 00:33:28 crc kubenswrapper[4928]: I1122 00:33:28.249480 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-healthcheck-log\") pod \"e2f3b92b-b265-413b-ab21-6836aab7918d\" (UID: \"e2f3b92b-b265-413b-ab21-6836aab7918d\") " Nov 22 00:33:28 crc kubenswrapper[4928]: I1122 00:33:28.249544 4928 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-sensubility-config\") pod \"e2f3b92b-b265-413b-ab21-6836aab7918d\" (UID: \"e2f3b92b-b265-413b-ab21-6836aab7918d\") " Nov 22 00:33:28 crc kubenswrapper[4928]: I1122 00:33:28.249624 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-ceilometer-publisher\") pod \"e2f3b92b-b265-413b-ab21-6836aab7918d\" (UID: \"e2f3b92b-b265-413b-ab21-6836aab7918d\") " Nov 22 00:33:28 crc kubenswrapper[4928]: I1122 00:33:28.256028 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2f3b92b-b265-413b-ab21-6836aab7918d-kube-api-access-w6bjt" (OuterVolumeSpecName: "kube-api-access-w6bjt") pod "e2f3b92b-b265-413b-ab21-6836aab7918d" (UID: "e2f3b92b-b265-413b-ab21-6836aab7918d"). InnerVolumeSpecName "kube-api-access-w6bjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:33:28 crc kubenswrapper[4928]: I1122 00:33:28.265588 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "e2f3b92b-b265-413b-ab21-6836aab7918d" (UID: "e2f3b92b-b265-413b-ab21-6836aab7918d"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:33:28 crc kubenswrapper[4928]: I1122 00:33:28.267748 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "e2f3b92b-b265-413b-ab21-6836aab7918d" (UID: "e2f3b92b-b265-413b-ab21-6836aab7918d"). InnerVolumeSpecName "sensubility-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:33:28 crc kubenswrapper[4928]: I1122 00:33:28.277647 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "e2f3b92b-b265-413b-ab21-6836aab7918d" (UID: "e2f3b92b-b265-413b-ab21-6836aab7918d"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:33:28 crc kubenswrapper[4928]: I1122 00:33:28.278225 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "e2f3b92b-b265-413b-ab21-6836aab7918d" (UID: "e2f3b92b-b265-413b-ab21-6836aab7918d"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:33:28 crc kubenswrapper[4928]: I1122 00:33:28.282030 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "e2f3b92b-b265-413b-ab21-6836aab7918d" (UID: "e2f3b92b-b265-413b-ab21-6836aab7918d"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:33:28 crc kubenswrapper[4928]: I1122 00:33:28.285670 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "e2f3b92b-b265-413b-ab21-6836aab7918d" (UID: "e2f3b92b-b265-413b-ab21-6836aab7918d"). InnerVolumeSpecName "ceilometer-publisher". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 00:33:28 crc kubenswrapper[4928]: I1122 00:33:28.352667 4928 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Nov 22 00:33:28 crc kubenswrapper[4928]: I1122 00:33:28.352748 4928 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Nov 22 00:33:28 crc kubenswrapper[4928]: I1122 00:33:28.352769 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6bjt\" (UniqueName: \"kubernetes.io/projected/e2f3b92b-b265-413b-ab21-6836aab7918d-kube-api-access-w6bjt\") on node \"crc\" DevicePath \"\"" Nov 22 00:33:28 crc kubenswrapper[4928]: I1122 00:33:28.352806 4928 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-collectd-config\") on node \"crc\" DevicePath \"\"" Nov 22 00:33:28 crc kubenswrapper[4928]: I1122 00:33:28.352823 4928 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-healthcheck-log\") on node \"crc\" DevicePath \"\"" Nov 22 00:33:28 crc kubenswrapper[4928]: I1122 00:33:28.352870 4928 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-sensubility-config\") on node \"crc\" DevicePath \"\"" Nov 22 00:33:28 crc kubenswrapper[4928]: I1122 00:33:28.352887 4928 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/e2f3b92b-b265-413b-ab21-6836aab7918d-ceilometer-publisher\") on node 
\"crc\" DevicePath \"\"" Nov 22 00:33:28 crc kubenswrapper[4928]: I1122 00:33:28.900081 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-4qdnt" event={"ID":"e2f3b92b-b265-413b-ab21-6836aab7918d","Type":"ContainerDied","Data":"055137fe37cb289847d47d982793d2cea37f7a27f00a9ba0bb75133701984209"} Nov 22 00:33:28 crc kubenswrapper[4928]: I1122 00:33:28.900140 4928 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="055137fe37cb289847d47d982793d2cea37f7a27f00a9ba0bb75133701984209" Nov 22 00:33:28 crc kubenswrapper[4928]: I1122 00:33:28.900210 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-4qdnt" Nov 22 00:33:30 crc kubenswrapper[4928]: I1122 00:33:30.049989 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-4qdnt_e2f3b92b-b265-413b-ab21-6836aab7918d/smoketest-collectd/0.log" Nov 22 00:33:30 crc kubenswrapper[4928]: I1122 00:33:30.286058 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-4qdnt_e2f3b92b-b265-413b-ab21-6836aab7918d/smoketest-ceilometer/0.log" Nov 22 00:33:30 crc kubenswrapper[4928]: I1122 00:33:30.542880 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-68864d46cb-t79jj_10247c42-bfd9-4ff3-967f-917f79dd5ac7/default-interconnect/0.log" Nov 22 00:33:30 crc kubenswrapper[4928]: I1122 00:33:30.785189 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx_733293ab-7999-45c4-a8fe-9459eed18b33/bridge/2.log" Nov 22 00:33:31 crc kubenswrapper[4928]: I1122 00:33:31.023672 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-8w9xx_733293ab-7999-45c4-a8fe-9459eed18b33/sg-core/0.log" Nov 22 00:33:31 
crc kubenswrapper[4928]: I1122 00:33:31.289829 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6_d3ef7e4d-312d-4763-95e9-6245ae074e73/bridge/2.log" Nov 22 00:33:31 crc kubenswrapper[4928]: I1122 00:33:31.544534 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-5fb94f46-b4qw6_d3ef7e4d-312d-4763-95e9-6245ae074e73/sg-core/0.log" Nov 22 00:33:31 crc kubenswrapper[4928]: I1122 00:33:31.791267 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn_c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714/bridge/2.log" Nov 22 00:33:32 crc kubenswrapper[4928]: I1122 00:33:32.081205 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-89rrn_c8f6cfc6-a08e-4dfe-b7d9-8b9b0c2a4714/sg-core/0.log" Nov 22 00:33:32 crc kubenswrapper[4928]: I1122 00:33:32.360767 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z_1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d/bridge/2.log" Nov 22 00:33:32 crc kubenswrapper[4928]: I1122 00:33:32.652228 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-57d5d69684-nwp6z_1778aa32-33c9-4f2c-bfdd-51fc8dc1c25d/sg-core/0.log" Nov 22 00:33:32 crc kubenswrapper[4928]: I1122 00:33:32.992482 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb_d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4/bridge/2.log" Nov 22 00:33:33 crc kubenswrapper[4928]: I1122 00:33:33.309179 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-6drxb_d9d56bc9-a4c5-4dea-a26f-e5f4df6667f4/sg-core/0.log" 
Nov 22 00:33:33 crc kubenswrapper[4928]: I1122 00:33:33.617837 4928 scope.go:117] "RemoveContainer" containerID="282a289cf12612849994dc9fccac42915b13cc2cdf7f7d4d5162c857fb9dc030" Nov 22 00:33:33 crc kubenswrapper[4928]: E1122 00:33:33.618132 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hp7cp_openshift-machine-config-operator(897ce64d-0600-4c6a-b684-0c1683fa14bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" Nov 22 00:33:36 crc kubenswrapper[4928]: I1122 00:33:36.324383 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-7c8ffb5b8-v9769_c1a256b5-b848-45ed-99e1-d01613414391/operator/0.log" Nov 22 00:33:36 crc kubenswrapper[4928]: I1122 00:33:36.588534 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_a1477d3a-da77-4acc-8bd0-7f5e5cdf32ad/prometheus/0.log" Nov 22 00:33:36 crc kubenswrapper[4928]: I1122 00:33:36.863397 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_611dc6e0-3341-4770-ba1d-9f8e626fb414/elasticsearch/0.log" Nov 22 00:33:37 crc kubenswrapper[4928]: I1122 00:33:37.117761 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-8qmqr_15489fd9-279f-4b12-9a61-54fc3a4ee40b/prometheus-webhook-snmp/0.log" Nov 22 00:33:37 crc kubenswrapper[4928]: I1122 00:33:37.421737 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_fcbba063-eb3a-45ff-bcfb-b50cfe532ee2/alertmanager/0.log" Nov 22 00:33:48 crc kubenswrapper[4928]: I1122 00:33:48.618059 4928 scope.go:117] "RemoveContainer" 
containerID="282a289cf12612849994dc9fccac42915b13cc2cdf7f7d4d5162c857fb9dc030" Nov 22 00:33:48 crc kubenswrapper[4928]: E1122 00:33:48.619117 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hp7cp_openshift-machine-config-operator(897ce64d-0600-4c6a-b684-0c1683fa14bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" Nov 22 00:33:52 crc kubenswrapper[4928]: I1122 00:33:52.128042 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-547448897-cv9jb_aed83cda-231a-4eb5-8640-1d190d97595c/operator/0.log" Nov 22 00:33:55 crc kubenswrapper[4928]: I1122 00:33:55.114314 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-7c8ffb5b8-v9769_c1a256b5-b848-45ed-99e1-d01613414391/operator/0.log" Nov 22 00:33:55 crc kubenswrapper[4928]: I1122 00:33:55.371976 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_23cb0972-ff54-4c17-9721-4dd1ec13f9a4/qdr/0.log" Nov 22 00:34:02 crc kubenswrapper[4928]: I1122 00:34:02.617889 4928 scope.go:117] "RemoveContainer" containerID="282a289cf12612849994dc9fccac42915b13cc2cdf7f7d4d5162c857fb9dc030" Nov 22 00:34:02 crc kubenswrapper[4928]: E1122 00:34:02.618450 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hp7cp_openshift-machine-config-operator(897ce64d-0600-4c6a-b684-0c1683fa14bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" Nov 22 00:34:17 crc kubenswrapper[4928]: I1122 00:34:17.618471 4928 
scope.go:117] "RemoveContainer" containerID="282a289cf12612849994dc9fccac42915b13cc2cdf7f7d4d5162c857fb9dc030" Nov 22 00:34:17 crc kubenswrapper[4928]: E1122 00:34:17.619280 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hp7cp_openshift-machine-config-operator(897ce64d-0600-4c6a-b684-0c1683fa14bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" Nov 22 00:34:20 crc kubenswrapper[4928]: I1122 00:34:20.405376 4928 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zxgt9/must-gather-8mh4x"] Nov 22 00:34:20 crc kubenswrapper[4928]: E1122 00:34:20.405878 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2f3b92b-b265-413b-ab21-6836aab7918d" containerName="smoketest-collectd" Nov 22 00:34:20 crc kubenswrapper[4928]: I1122 00:34:20.405891 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2f3b92b-b265-413b-ab21-6836aab7918d" containerName="smoketest-collectd" Nov 22 00:34:20 crc kubenswrapper[4928]: E1122 00:34:20.405913 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37bf28c5-f277-4d11-aa26-3be7828432ee" containerName="extract-utilities" Nov 22 00:34:20 crc kubenswrapper[4928]: I1122 00:34:20.405918 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="37bf28c5-f277-4d11-aa26-3be7828432ee" containerName="extract-utilities" Nov 22 00:34:20 crc kubenswrapper[4928]: E1122 00:34:20.405929 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2f3b92b-b265-413b-ab21-6836aab7918d" containerName="smoketest-ceilometer" Nov 22 00:34:20 crc kubenswrapper[4928]: I1122 00:34:20.405935 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2f3b92b-b265-413b-ab21-6836aab7918d" containerName="smoketest-ceilometer" Nov 22 00:34:20 
crc kubenswrapper[4928]: E1122 00:34:20.405945 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37bf28c5-f277-4d11-aa26-3be7828432ee" containerName="registry-server" Nov 22 00:34:20 crc kubenswrapper[4928]: I1122 00:34:20.405951 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="37bf28c5-f277-4d11-aa26-3be7828432ee" containerName="registry-server" Nov 22 00:34:20 crc kubenswrapper[4928]: E1122 00:34:20.405959 4928 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37bf28c5-f277-4d11-aa26-3be7828432ee" containerName="extract-content" Nov 22 00:34:20 crc kubenswrapper[4928]: I1122 00:34:20.405964 4928 state_mem.go:107] "Deleted CPUSet assignment" podUID="37bf28c5-f277-4d11-aa26-3be7828432ee" containerName="extract-content" Nov 22 00:34:20 crc kubenswrapper[4928]: I1122 00:34:20.406064 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2f3b92b-b265-413b-ab21-6836aab7918d" containerName="smoketest-collectd" Nov 22 00:34:20 crc kubenswrapper[4928]: I1122 00:34:20.406078 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="37bf28c5-f277-4d11-aa26-3be7828432ee" containerName="registry-server" Nov 22 00:34:20 crc kubenswrapper[4928]: I1122 00:34:20.406088 4928 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2f3b92b-b265-413b-ab21-6836aab7918d" containerName="smoketest-ceilometer" Nov 22 00:34:20 crc kubenswrapper[4928]: I1122 00:34:20.406806 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zxgt9/must-gather-8mh4x" Nov 22 00:34:20 crc kubenswrapper[4928]: I1122 00:34:20.409593 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zxgt9"/"kube-root-ca.crt" Nov 22 00:34:20 crc kubenswrapper[4928]: I1122 00:34:20.409868 4928 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zxgt9"/"openshift-service-ca.crt" Nov 22 00:34:20 crc kubenswrapper[4928]: I1122 00:34:20.411565 4928 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-zxgt9"/"default-dockercfg-szrv8" Nov 22 00:34:20 crc kubenswrapper[4928]: I1122 00:34:20.424148 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zxgt9/must-gather-8mh4x"] Nov 22 00:34:20 crc kubenswrapper[4928]: I1122 00:34:20.454624 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnbr2\" (UniqueName: \"kubernetes.io/projected/cf1f5706-34f5-4dd2-92df-325a50b9a4e5-kube-api-access-qnbr2\") pod \"must-gather-8mh4x\" (UID: \"cf1f5706-34f5-4dd2-92df-325a50b9a4e5\") " pod="openshift-must-gather-zxgt9/must-gather-8mh4x" Nov 22 00:34:20 crc kubenswrapper[4928]: I1122 00:34:20.454710 4928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cf1f5706-34f5-4dd2-92df-325a50b9a4e5-must-gather-output\") pod \"must-gather-8mh4x\" (UID: \"cf1f5706-34f5-4dd2-92df-325a50b9a4e5\") " pod="openshift-must-gather-zxgt9/must-gather-8mh4x" Nov 22 00:34:20 crc kubenswrapper[4928]: I1122 00:34:20.555672 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnbr2\" (UniqueName: \"kubernetes.io/projected/cf1f5706-34f5-4dd2-92df-325a50b9a4e5-kube-api-access-qnbr2\") pod \"must-gather-8mh4x\" (UID: \"cf1f5706-34f5-4dd2-92df-325a50b9a4e5\") " 
pod="openshift-must-gather-zxgt9/must-gather-8mh4x" Nov 22 00:34:20 crc kubenswrapper[4928]: I1122 00:34:20.555754 4928 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cf1f5706-34f5-4dd2-92df-325a50b9a4e5-must-gather-output\") pod \"must-gather-8mh4x\" (UID: \"cf1f5706-34f5-4dd2-92df-325a50b9a4e5\") " pod="openshift-must-gather-zxgt9/must-gather-8mh4x" Nov 22 00:34:20 crc kubenswrapper[4928]: I1122 00:34:20.556384 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cf1f5706-34f5-4dd2-92df-325a50b9a4e5-must-gather-output\") pod \"must-gather-8mh4x\" (UID: \"cf1f5706-34f5-4dd2-92df-325a50b9a4e5\") " pod="openshift-must-gather-zxgt9/must-gather-8mh4x" Nov 22 00:34:20 crc kubenswrapper[4928]: I1122 00:34:20.576390 4928 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnbr2\" (UniqueName: \"kubernetes.io/projected/cf1f5706-34f5-4dd2-92df-325a50b9a4e5-kube-api-access-qnbr2\") pod \"must-gather-8mh4x\" (UID: \"cf1f5706-34f5-4dd2-92df-325a50b9a4e5\") " pod="openshift-must-gather-zxgt9/must-gather-8mh4x" Nov 22 00:34:20 crc kubenswrapper[4928]: I1122 00:34:20.728415 4928 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zxgt9/must-gather-8mh4x" Nov 22 00:34:21 crc kubenswrapper[4928]: I1122 00:34:21.120401 4928 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zxgt9/must-gather-8mh4x"] Nov 22 00:34:21 crc kubenswrapper[4928]: I1122 00:34:21.290781 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zxgt9/must-gather-8mh4x" event={"ID":"cf1f5706-34f5-4dd2-92df-325a50b9a4e5","Type":"ContainerStarted","Data":"51c0a19660bcb3321ae41264b1481f2717005b43c1249fa915e1b9ad524e41e7"} Nov 22 00:34:28 crc kubenswrapper[4928]: I1122 00:34:28.348137 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zxgt9/must-gather-8mh4x" event={"ID":"cf1f5706-34f5-4dd2-92df-325a50b9a4e5","Type":"ContainerStarted","Data":"4f77f89770e7a3321f3320f522726b7e31e6cd815a226b34a46041ee914d40ad"} Nov 22 00:34:29 crc kubenswrapper[4928]: I1122 00:34:29.356267 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zxgt9/must-gather-8mh4x" event={"ID":"cf1f5706-34f5-4dd2-92df-325a50b9a4e5","Type":"ContainerStarted","Data":"f0cffac02d42e88c55f5ba8150453bf8abd3be169938fa6896757847966af6dd"} Nov 22 00:34:29 crc kubenswrapper[4928]: I1122 00:34:29.374525 4928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zxgt9/must-gather-8mh4x" podStartSLOduration=2.428614489 podStartE2EDuration="9.374509618s" podCreationTimestamp="2025-11-22 00:34:20 +0000 UTC" firstStartedPulling="2025-11-22 00:34:21.128228098 +0000 UTC m=+1696.416259289" lastFinishedPulling="2025-11-22 00:34:28.074123227 +0000 UTC m=+1703.362154418" observedRunningTime="2025-11-22 00:34:29.372275597 +0000 UTC m=+1704.660306788" watchObservedRunningTime="2025-11-22 00:34:29.374509618 +0000 UTC m=+1704.662540819" Nov 22 00:34:32 crc kubenswrapper[4928]: I1122 00:34:32.618282 4928 scope.go:117] "RemoveContainer" 
containerID="282a289cf12612849994dc9fccac42915b13cc2cdf7f7d4d5162c857fb9dc030" Nov 22 00:34:32 crc kubenswrapper[4928]: E1122 00:34:32.618813 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hp7cp_openshift-machine-config-operator(897ce64d-0600-4c6a-b684-0c1683fa14bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" Nov 22 00:34:47 crc kubenswrapper[4928]: I1122 00:34:47.618060 4928 scope.go:117] "RemoveContainer" containerID="282a289cf12612849994dc9fccac42915b13cc2cdf7f7d4d5162c857fb9dc030" Nov 22 00:34:47 crc kubenswrapper[4928]: E1122 00:34:47.619021 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hp7cp_openshift-machine-config-operator(897ce64d-0600-4c6a-b684-0c1683fa14bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" Nov 22 00:35:00 crc kubenswrapper[4928]: I1122 00:35:00.618455 4928 scope.go:117] "RemoveContainer" containerID="282a289cf12612849994dc9fccac42915b13cc2cdf7f7d4d5162c857fb9dc030" Nov 22 00:35:00 crc kubenswrapper[4928]: E1122 00:35:00.619073 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hp7cp_openshift-machine-config-operator(897ce64d-0600-4c6a-b684-0c1683fa14bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" Nov 22 00:35:04 crc kubenswrapper[4928]: I1122 00:35:04.047967 4928 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-7xwgw_cef7a6ad-8a53-415c-b18e-098c96f5c630/control-plane-machine-set-operator/0.log" Nov 22 00:35:04 crc kubenswrapper[4928]: I1122 00:35:04.192562 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-wqtln_4a07cf7e-773c-46ea-989f-ec01540e81ee/kube-rbac-proxy/0.log" Nov 22 00:35:04 crc kubenswrapper[4928]: I1122 00:35:04.212999 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-wqtln_4a07cf7e-773c-46ea-989f-ec01540e81ee/machine-api-operator/0.log" Nov 22 00:35:14 crc kubenswrapper[4928]: I1122 00:35:14.444987 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-ft6w9_0042c8ac-21bd-4461-b86d-2d35d8135b42/cert-manager-controller/0.log" Nov 22 00:35:14 crc kubenswrapper[4928]: I1122 00:35:14.596418 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-jpw42_c2248cb6-0cd4-404e-bb44-b6736ec6149b/cert-manager-cainjector/0.log" Nov 22 00:35:14 crc kubenswrapper[4928]: I1122 00:35:14.596854 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-92wzm_8f2f9770-1e25-466d-853b-04fcf1ec872a/cert-manager-webhook/0.log" Nov 22 00:35:15 crc kubenswrapper[4928]: I1122 00:35:15.621757 4928 scope.go:117] "RemoveContainer" containerID="282a289cf12612849994dc9fccac42915b13cc2cdf7f7d4d5162c857fb9dc030" Nov 22 00:35:15 crc kubenswrapper[4928]: E1122 00:35:15.622084 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hp7cp_openshift-machine-config-operator(897ce64d-0600-4c6a-b684-0c1683fa14bc)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" Nov 22 00:35:28 crc kubenswrapper[4928]: I1122 00:35:28.426197 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc_136db455-41e9-435d-8628-2674862c3ec9/util/0.log" Nov 22 00:35:28 crc kubenswrapper[4928]: I1122 00:35:28.612260 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc_136db455-41e9-435d-8628-2674862c3ec9/util/0.log" Nov 22 00:35:28 crc kubenswrapper[4928]: I1122 00:35:28.617643 4928 scope.go:117] "RemoveContainer" containerID="282a289cf12612849994dc9fccac42915b13cc2cdf7f7d4d5162c857fb9dc030" Nov 22 00:35:28 crc kubenswrapper[4928]: E1122 00:35:28.617923 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hp7cp_openshift-machine-config-operator(897ce64d-0600-4c6a-b684-0c1683fa14bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" Nov 22 00:35:28 crc kubenswrapper[4928]: I1122 00:35:28.628737 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc_136db455-41e9-435d-8628-2674862c3ec9/pull/0.log" Nov 22 00:35:28 crc kubenswrapper[4928]: I1122 00:35:28.659715 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc_136db455-41e9-435d-8628-2674862c3ec9/pull/0.log" Nov 22 00:35:28 crc kubenswrapper[4928]: I1122 00:35:28.787739 4928 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc_136db455-41e9-435d-8628-2674862c3ec9/util/0.log" Nov 22 00:35:28 crc kubenswrapper[4928]: I1122 00:35:28.842155 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc_136db455-41e9-435d-8628-2674862c3ec9/extract/0.log" Nov 22 00:35:28 crc kubenswrapper[4928]: I1122 00:35:28.855772 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931artwhc_136db455-41e9-435d-8628-2674862c3ec9/pull/0.log" Nov 22 00:35:28 crc kubenswrapper[4928]: I1122 00:35:28.944916 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv_c5d282e4-90d5-40c4-a8a2-2a34a55fe743/util/0.log" Nov 22 00:35:29 crc kubenswrapper[4928]: I1122 00:35:29.123148 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv_c5d282e4-90d5-40c4-a8a2-2a34a55fe743/pull/0.log" Nov 22 00:35:29 crc kubenswrapper[4928]: I1122 00:35:29.135954 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv_c5d282e4-90d5-40c4-a8a2-2a34a55fe743/util/0.log" Nov 22 00:35:29 crc kubenswrapper[4928]: I1122 00:35:29.140896 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv_c5d282e4-90d5-40c4-a8a2-2a34a55fe743/pull/0.log" Nov 22 00:35:29 crc kubenswrapper[4928]: I1122 00:35:29.293722 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv_c5d282e4-90d5-40c4-a8a2-2a34a55fe743/util/0.log" Nov 22 
00:35:29 crc kubenswrapper[4928]: I1122 00:35:29.312756 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv_c5d282e4-90d5-40c4-a8a2-2a34a55fe743/pull/0.log" Nov 22 00:35:29 crc kubenswrapper[4928]: I1122 00:35:29.333742 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gpmjv_c5d282e4-90d5-40c4-a8a2-2a34a55fe743/extract/0.log" Nov 22 00:35:29 crc kubenswrapper[4928]: I1122 00:35:29.450947 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6_3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0/util/0.log" Nov 22 00:35:29 crc kubenswrapper[4928]: I1122 00:35:29.598219 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6_3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0/util/0.log" Nov 22 00:35:29 crc kubenswrapper[4928]: I1122 00:35:29.640221 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6_3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0/pull/0.log" Nov 22 00:35:29 crc kubenswrapper[4928]: I1122 00:35:29.695222 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6_3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0/pull/0.log" Nov 22 00:35:29 crc kubenswrapper[4928]: I1122 00:35:29.753817 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6_3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0/pull/0.log" Nov 22 00:35:29 crc kubenswrapper[4928]: I1122 00:35:29.760638 4928 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6_3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0/util/0.log" Nov 22 00:35:29 crc kubenswrapper[4928]: I1122 00:35:29.808775 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwpkd6_3dc2e4f6-10df-4986-a6a8-16d3fd6c9ca0/extract/0.log" Nov 22 00:35:29 crc kubenswrapper[4928]: I1122 00:35:29.940057 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh_3c84ac41-e8e8-486b-ad01-5e453477cf90/util/0.log" Nov 22 00:35:30 crc kubenswrapper[4928]: I1122 00:35:30.042323 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh_3c84ac41-e8e8-486b-ad01-5e453477cf90/pull/0.log" Nov 22 00:35:30 crc kubenswrapper[4928]: I1122 00:35:30.057440 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh_3c84ac41-e8e8-486b-ad01-5e453477cf90/pull/0.log" Nov 22 00:35:30 crc kubenswrapper[4928]: I1122 00:35:30.060765 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh_3c84ac41-e8e8-486b-ad01-5e453477cf90/util/0.log" Nov 22 00:35:30 crc kubenswrapper[4928]: I1122 00:35:30.218310 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh_3c84ac41-e8e8-486b-ad01-5e453477cf90/util/0.log" Nov 22 00:35:30 crc kubenswrapper[4928]: I1122 00:35:30.228065 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh_3c84ac41-e8e8-486b-ad01-5e453477cf90/pull/0.log" Nov 22 
00:35:30 crc kubenswrapper[4928]: I1122 00:35:30.245208 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ew2rlh_3c84ac41-e8e8-486b-ad01-5e453477cf90/extract/0.log" Nov 22 00:35:30 crc kubenswrapper[4928]: I1122 00:35:30.371455 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r97th_e291904d-64aa-49d5-9cef-fddedca94cc9/extract-utilities/0.log" Nov 22 00:35:30 crc kubenswrapper[4928]: I1122 00:35:30.540042 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r97th_e291904d-64aa-49d5-9cef-fddedca94cc9/extract-content/0.log" Nov 22 00:35:30 crc kubenswrapper[4928]: I1122 00:35:30.555951 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r97th_e291904d-64aa-49d5-9cef-fddedca94cc9/extract-content/0.log" Nov 22 00:35:30 crc kubenswrapper[4928]: I1122 00:35:30.556912 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r97th_e291904d-64aa-49d5-9cef-fddedca94cc9/extract-utilities/0.log" Nov 22 00:35:30 crc kubenswrapper[4928]: I1122 00:35:30.696750 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r97th_e291904d-64aa-49d5-9cef-fddedca94cc9/extract-utilities/0.log" Nov 22 00:35:30 crc kubenswrapper[4928]: I1122 00:35:30.704073 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r97th_e291904d-64aa-49d5-9cef-fddedca94cc9/extract-content/0.log" Nov 22 00:35:30 crc kubenswrapper[4928]: I1122 00:35:30.929938 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5qv65_99661af0-b4f1-4678-b859-2e5f0a3af123/extract-utilities/0.log" Nov 22 00:35:30 crc kubenswrapper[4928]: I1122 00:35:30.975425 4928 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r97th_e291904d-64aa-49d5-9cef-fddedca94cc9/registry-server/0.log" Nov 22 00:35:31 crc kubenswrapper[4928]: I1122 00:35:31.050431 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5qv65_99661af0-b4f1-4678-b859-2e5f0a3af123/extract-utilities/0.log" Nov 22 00:35:31 crc kubenswrapper[4928]: I1122 00:35:31.100567 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5qv65_99661af0-b4f1-4678-b859-2e5f0a3af123/extract-content/0.log" Nov 22 00:35:31 crc kubenswrapper[4928]: I1122 00:35:31.100584 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5qv65_99661af0-b4f1-4678-b859-2e5f0a3af123/extract-content/0.log" Nov 22 00:35:31 crc kubenswrapper[4928]: I1122 00:35:31.251532 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5qv65_99661af0-b4f1-4678-b859-2e5f0a3af123/extract-content/0.log" Nov 22 00:35:31 crc kubenswrapper[4928]: I1122 00:35:31.258876 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5qv65_99661af0-b4f1-4678-b859-2e5f0a3af123/extract-utilities/0.log" Nov 22 00:35:31 crc kubenswrapper[4928]: I1122 00:35:31.437727 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-8v67p_19f06771-3add-4b82-a14e-a3395bb04a75/marketplace-operator/0.log" Nov 22 00:35:31 crc kubenswrapper[4928]: I1122 00:35:31.485082 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5qv65_99661af0-b4f1-4678-b859-2e5f0a3af123/registry-server/0.log" Nov 22 00:35:31 crc kubenswrapper[4928]: I1122 00:35:31.534527 4928 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-4s2zz_f9f5d528-c852-4804-8ac3-9c3fd00cc1d3/extract-utilities/0.log" Nov 22 00:35:31 crc kubenswrapper[4928]: I1122 00:35:31.643311 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4s2zz_f9f5d528-c852-4804-8ac3-9c3fd00cc1d3/extract-utilities/0.log" Nov 22 00:35:31 crc kubenswrapper[4928]: I1122 00:35:31.647106 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4s2zz_f9f5d528-c852-4804-8ac3-9c3fd00cc1d3/extract-content/0.log" Nov 22 00:35:31 crc kubenswrapper[4928]: I1122 00:35:31.663660 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4s2zz_f9f5d528-c852-4804-8ac3-9c3fd00cc1d3/extract-content/0.log" Nov 22 00:35:31 crc kubenswrapper[4928]: I1122 00:35:31.804426 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4s2zz_f9f5d528-c852-4804-8ac3-9c3fd00cc1d3/extract-utilities/0.log" Nov 22 00:35:31 crc kubenswrapper[4928]: I1122 00:35:31.820279 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4s2zz_f9f5d528-c852-4804-8ac3-9c3fd00cc1d3/extract-content/0.log" Nov 22 00:35:32 crc kubenswrapper[4928]: I1122 00:35:32.036259 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4s2zz_f9f5d528-c852-4804-8ac3-9c3fd00cc1d3/registry-server/0.log" Nov 22 00:35:43 crc kubenswrapper[4928]: I1122 00:35:43.618013 4928 scope.go:117] "RemoveContainer" containerID="282a289cf12612849994dc9fccac42915b13cc2cdf7f7d4d5162c857fb9dc030" Nov 22 00:35:43 crc kubenswrapper[4928]: E1122 00:35:43.618561 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hp7cp_openshift-machine-config-operator(897ce64d-0600-4c6a-b684-0c1683fa14bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" Nov 22 00:35:43 crc kubenswrapper[4928]: I1122 00:35:43.715098 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-bxsxq_f09d8639-7745-440c-a21b-5008c071fd0b/prometheus-operator/0.log" Nov 22 00:35:43 crc kubenswrapper[4928]: I1122 00:35:43.852531 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-66ddb79c8b-t6kn4_645d1f1b-9079-4a84-bf38-f1029dd01b07/prometheus-operator-admission-webhook/0.log" Nov 22 00:35:43 crc kubenswrapper[4928]: I1122 00:35:43.863716 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-66ddb79c8b-lxzbv_2dd2bccc-816e-41de-906e-3541bfebaa24/prometheus-operator-admission-webhook/0.log" Nov 22 00:35:44 crc kubenswrapper[4928]: I1122 00:35:44.003507 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-ntd99_c771d376-6f26-4072-936f-e8a81dfa2596/operator/0.log" Nov 22 00:35:44 crc kubenswrapper[4928]: I1122 00:35:44.087661 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-w6578_6ed2e567-35f6-4eba-992b-0e94c74db92e/perses-operator/0.log" Nov 22 00:35:54 crc kubenswrapper[4928]: I1122 00:35:54.617854 4928 scope.go:117] "RemoveContainer" containerID="282a289cf12612849994dc9fccac42915b13cc2cdf7f7d4d5162c857fb9dc030" Nov 22 00:35:54 crc kubenswrapper[4928]: E1122 00:35:54.618532 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hp7cp_openshift-machine-config-operator(897ce64d-0600-4c6a-b684-0c1683fa14bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" Nov 22 00:36:06 crc kubenswrapper[4928]: I1122 00:36:06.618053 4928 scope.go:117] "RemoveContainer" containerID="282a289cf12612849994dc9fccac42915b13cc2cdf7f7d4d5162c857fb9dc030" Nov 22 00:36:06 crc kubenswrapper[4928]: E1122 00:36:06.618776 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hp7cp_openshift-machine-config-operator(897ce64d-0600-4c6a-b684-0c1683fa14bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" Nov 22 00:36:21 crc kubenswrapper[4928]: I1122 00:36:21.618701 4928 scope.go:117] "RemoveContainer" containerID="282a289cf12612849994dc9fccac42915b13cc2cdf7f7d4d5162c857fb9dc030" Nov 22 00:36:21 crc kubenswrapper[4928]: E1122 00:36:21.619282 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hp7cp_openshift-machine-config-operator(897ce64d-0600-4c6a-b684-0c1683fa14bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" Nov 22 00:36:28 crc kubenswrapper[4928]: I1122 00:36:28.274199 4928 generic.go:334] "Generic (PLEG): container finished" podID="cf1f5706-34f5-4dd2-92df-325a50b9a4e5" containerID="4f77f89770e7a3321f3320f522726b7e31e6cd815a226b34a46041ee914d40ad" exitCode=0 Nov 22 00:36:28 crc kubenswrapper[4928]: I1122 00:36:28.274292 4928 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zxgt9/must-gather-8mh4x" 
event={"ID":"cf1f5706-34f5-4dd2-92df-325a50b9a4e5","Type":"ContainerDied","Data":"4f77f89770e7a3321f3320f522726b7e31e6cd815a226b34a46041ee914d40ad"} Nov 22 00:36:28 crc kubenswrapper[4928]: I1122 00:36:28.275541 4928 scope.go:117] "RemoveContainer" containerID="4f77f89770e7a3321f3320f522726b7e31e6cd815a226b34a46041ee914d40ad" Nov 22 00:36:28 crc kubenswrapper[4928]: I1122 00:36:28.373711 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zxgt9_must-gather-8mh4x_cf1f5706-34f5-4dd2-92df-325a50b9a4e5/gather/0.log" Nov 22 00:36:34 crc kubenswrapper[4928]: I1122 00:36:34.618668 4928 scope.go:117] "RemoveContainer" containerID="282a289cf12612849994dc9fccac42915b13cc2cdf7f7d4d5162c857fb9dc030" Nov 22 00:36:34 crc kubenswrapper[4928]: E1122 00:36:34.619435 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hp7cp_openshift-machine-config-operator(897ce64d-0600-4c6a-b684-0c1683fa14bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" Nov 22 00:36:35 crc kubenswrapper[4928]: I1122 00:36:35.490400 4928 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zxgt9/must-gather-8mh4x"] Nov 22 00:36:35 crc kubenswrapper[4928]: I1122 00:36:35.491324 4928 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-zxgt9/must-gather-8mh4x" podUID="cf1f5706-34f5-4dd2-92df-325a50b9a4e5" containerName="copy" containerID="cri-o://f0cffac02d42e88c55f5ba8150453bf8abd3be169938fa6896757847966af6dd" gracePeriod=2 Nov 22 00:36:35 crc kubenswrapper[4928]: I1122 00:36:35.514810 4928 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zxgt9/must-gather-8mh4x"] Nov 22 00:36:35 crc kubenswrapper[4928]: I1122 00:36:35.898313 4928 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zxgt9_must-gather-8mh4x_cf1f5706-34f5-4dd2-92df-325a50b9a4e5/copy/0.log" Nov 22 00:36:35 crc kubenswrapper[4928]: I1122 00:36:35.899021 4928 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zxgt9/must-gather-8mh4x" Nov 22 00:36:35 crc kubenswrapper[4928]: I1122 00:36:35.955505 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cf1f5706-34f5-4dd2-92df-325a50b9a4e5-must-gather-output\") pod \"cf1f5706-34f5-4dd2-92df-325a50b9a4e5\" (UID: \"cf1f5706-34f5-4dd2-92df-325a50b9a4e5\") " Nov 22 00:36:35 crc kubenswrapper[4928]: I1122 00:36:35.955573 4928 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnbr2\" (UniqueName: \"kubernetes.io/projected/cf1f5706-34f5-4dd2-92df-325a50b9a4e5-kube-api-access-qnbr2\") pod \"cf1f5706-34f5-4dd2-92df-325a50b9a4e5\" (UID: \"cf1f5706-34f5-4dd2-92df-325a50b9a4e5\") " Nov 22 00:36:35 crc kubenswrapper[4928]: I1122 00:36:35.964785 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf1f5706-34f5-4dd2-92df-325a50b9a4e5-kube-api-access-qnbr2" (OuterVolumeSpecName: "kube-api-access-qnbr2") pod "cf1f5706-34f5-4dd2-92df-325a50b9a4e5" (UID: "cf1f5706-34f5-4dd2-92df-325a50b9a4e5"). InnerVolumeSpecName "kube-api-access-qnbr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 00:36:36 crc kubenswrapper[4928]: I1122 00:36:36.029290 4928 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf1f5706-34f5-4dd2-92df-325a50b9a4e5-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "cf1f5706-34f5-4dd2-92df-325a50b9a4e5" (UID: "cf1f5706-34f5-4dd2-92df-325a50b9a4e5"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 00:36:36 crc kubenswrapper[4928]: I1122 00:36:36.057775 4928 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cf1f5706-34f5-4dd2-92df-325a50b9a4e5-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 22 00:36:36 crc kubenswrapper[4928]: I1122 00:36:36.057830 4928 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnbr2\" (UniqueName: \"kubernetes.io/projected/cf1f5706-34f5-4dd2-92df-325a50b9a4e5-kube-api-access-qnbr2\") on node \"crc\" DevicePath \"\"" Nov 22 00:36:36 crc kubenswrapper[4928]: I1122 00:36:36.325874 4928 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zxgt9_must-gather-8mh4x_cf1f5706-34f5-4dd2-92df-325a50b9a4e5/copy/0.log" Nov 22 00:36:36 crc kubenswrapper[4928]: I1122 00:36:36.326636 4928 generic.go:334] "Generic (PLEG): container finished" podID="cf1f5706-34f5-4dd2-92df-325a50b9a4e5" containerID="f0cffac02d42e88c55f5ba8150453bf8abd3be169938fa6896757847966af6dd" exitCode=143 Nov 22 00:36:36 crc kubenswrapper[4928]: I1122 00:36:36.326721 4928 scope.go:117] "RemoveContainer" containerID="f0cffac02d42e88c55f5ba8150453bf8abd3be169938fa6896757847966af6dd" Nov 22 00:36:36 crc kubenswrapper[4928]: I1122 00:36:36.326778 4928 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zxgt9/must-gather-8mh4x" Nov 22 00:36:36 crc kubenswrapper[4928]: I1122 00:36:36.370036 4928 scope.go:117] "RemoveContainer" containerID="4f77f89770e7a3321f3320f522726b7e31e6cd815a226b34a46041ee914d40ad" Nov 22 00:36:36 crc kubenswrapper[4928]: I1122 00:36:36.428999 4928 scope.go:117] "RemoveContainer" containerID="f0cffac02d42e88c55f5ba8150453bf8abd3be169938fa6896757847966af6dd" Nov 22 00:36:36 crc kubenswrapper[4928]: E1122 00:36:36.429400 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0cffac02d42e88c55f5ba8150453bf8abd3be169938fa6896757847966af6dd\": container with ID starting with f0cffac02d42e88c55f5ba8150453bf8abd3be169938fa6896757847966af6dd not found: ID does not exist" containerID="f0cffac02d42e88c55f5ba8150453bf8abd3be169938fa6896757847966af6dd" Nov 22 00:36:36 crc kubenswrapper[4928]: I1122 00:36:36.429446 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0cffac02d42e88c55f5ba8150453bf8abd3be169938fa6896757847966af6dd"} err="failed to get container status \"f0cffac02d42e88c55f5ba8150453bf8abd3be169938fa6896757847966af6dd\": rpc error: code = NotFound desc = could not find container \"f0cffac02d42e88c55f5ba8150453bf8abd3be169938fa6896757847966af6dd\": container with ID starting with f0cffac02d42e88c55f5ba8150453bf8abd3be169938fa6896757847966af6dd not found: ID does not exist" Nov 22 00:36:36 crc kubenswrapper[4928]: I1122 00:36:36.429473 4928 scope.go:117] "RemoveContainer" containerID="4f77f89770e7a3321f3320f522726b7e31e6cd815a226b34a46041ee914d40ad" Nov 22 00:36:36 crc kubenswrapper[4928]: E1122 00:36:36.429774 4928 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f77f89770e7a3321f3320f522726b7e31e6cd815a226b34a46041ee914d40ad\": container with ID starting with 
4f77f89770e7a3321f3320f522726b7e31e6cd815a226b34a46041ee914d40ad not found: ID does not exist" containerID="4f77f89770e7a3321f3320f522726b7e31e6cd815a226b34a46041ee914d40ad" Nov 22 00:36:36 crc kubenswrapper[4928]: I1122 00:36:36.429834 4928 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f77f89770e7a3321f3320f522726b7e31e6cd815a226b34a46041ee914d40ad"} err="failed to get container status \"4f77f89770e7a3321f3320f522726b7e31e6cd815a226b34a46041ee914d40ad\": rpc error: code = NotFound desc = could not find container \"4f77f89770e7a3321f3320f522726b7e31e6cd815a226b34a46041ee914d40ad\": container with ID starting with 4f77f89770e7a3321f3320f522726b7e31e6cd815a226b34a46041ee914d40ad not found: ID does not exist" Nov 22 00:36:37 crc kubenswrapper[4928]: I1122 00:36:37.627289 4928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf1f5706-34f5-4dd2-92df-325a50b9a4e5" path="/var/lib/kubelet/pods/cf1f5706-34f5-4dd2-92df-325a50b9a4e5/volumes" Nov 22 00:36:49 crc kubenswrapper[4928]: I1122 00:36:49.617710 4928 scope.go:117] "RemoveContainer" containerID="282a289cf12612849994dc9fccac42915b13cc2cdf7f7d4d5162c857fb9dc030" Nov 22 00:36:49 crc kubenswrapper[4928]: E1122 00:36:49.618299 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hp7cp_openshift-machine-config-operator(897ce64d-0600-4c6a-b684-0c1683fa14bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" Nov 22 00:37:03 crc kubenswrapper[4928]: I1122 00:37:03.618830 4928 scope.go:117] "RemoveContainer" containerID="282a289cf12612849994dc9fccac42915b13cc2cdf7f7d4d5162c857fb9dc030" Nov 22 00:37:03 crc kubenswrapper[4928]: E1122 00:37:03.619556 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hp7cp_openshift-machine-config-operator(897ce64d-0600-4c6a-b684-0c1683fa14bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" Nov 22 00:37:16 crc kubenswrapper[4928]: I1122 00:37:16.619110 4928 scope.go:117] "RemoveContainer" containerID="282a289cf12612849994dc9fccac42915b13cc2cdf7f7d4d5162c857fb9dc030" Nov 22 00:37:16 crc kubenswrapper[4928]: E1122 00:37:16.619943 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hp7cp_openshift-machine-config-operator(897ce64d-0600-4c6a-b684-0c1683fa14bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" Nov 22 00:37:29 crc kubenswrapper[4928]: I1122 00:37:29.618569 4928 scope.go:117] "RemoveContainer" containerID="282a289cf12612849994dc9fccac42915b13cc2cdf7f7d4d5162c857fb9dc030" Nov 22 00:37:29 crc kubenswrapper[4928]: E1122 00:37:29.619727 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hp7cp_openshift-machine-config-operator(897ce64d-0600-4c6a-b684-0c1683fa14bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" Nov 22 00:37:40 crc kubenswrapper[4928]: I1122 00:37:40.618291 4928 scope.go:117] "RemoveContainer" containerID="282a289cf12612849994dc9fccac42915b13cc2cdf7f7d4d5162c857fb9dc030" Nov 22 00:37:40 crc kubenswrapper[4928]: E1122 00:37:40.619105 4928 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hp7cp_openshift-machine-config-operator(897ce64d-0600-4c6a-b684-0c1683fa14bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" Nov 22 00:37:52 crc kubenswrapper[4928]: I1122 00:37:52.619064 4928 scope.go:117] "RemoveContainer" containerID="282a289cf12612849994dc9fccac42915b13cc2cdf7f7d4d5162c857fb9dc030" Nov 22 00:37:52 crc kubenswrapper[4928]: E1122 00:37:52.620409 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hp7cp_openshift-machine-config-operator(897ce64d-0600-4c6a-b684-0c1683fa14bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" Nov 22 00:38:06 crc kubenswrapper[4928]: I1122 00:38:06.618133 4928 scope.go:117] "RemoveContainer" containerID="282a289cf12612849994dc9fccac42915b13cc2cdf7f7d4d5162c857fb9dc030" Nov 22 00:38:06 crc kubenswrapper[4928]: E1122 00:38:06.618767 4928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hp7cp_openshift-machine-config-operator(897ce64d-0600-4c6a-b684-0c1683fa14bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" podUID="897ce64d-0600-4c6a-b684-0c1683fa14bc" Nov 22 00:38:20 crc kubenswrapper[4928]: I1122 00:38:20.618521 4928 scope.go:117] "RemoveContainer" containerID="282a289cf12612849994dc9fccac42915b13cc2cdf7f7d4d5162c857fb9dc030" Nov 22 00:38:21 crc kubenswrapper[4928]: I1122 00:38:21.240357 4928 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hp7cp" event={"ID":"897ce64d-0600-4c6a-b684-0c1683fa14bc","Type":"ContainerStarted","Data":"7d9b1cf6d207bfe9fb42cf564cce720e47c3e16d2d6ea8eb4e150623efe995ab"}